I very much agree with mickpitkin92, but I would go about it in a different way. "Nothing is impossible for the man who doesn't have to do it himself."
First, the problem of managing legacy commands/functions etc.: my suggestion would be that the switch back to legacy behaviour could be made at a more granular level.
Rather than having a humongous code base with lots of flags and tests everywhere, keep the 6.x branch as it is but give it access to the Abstract Syntax Tree (AST) of the 8.x branch (I'm assuming the parser works like most standard compilers).
Now in many cases, when something breaks it's probably in a couple of handlers/functions (don't you just hate generalisations?). For instance, the routines that access individual chars might break if you are using Unicode.
If we had the equivalent of UseSystemDate, which is in effect a local flag (and maybe different versions of it, e.g. set UseUnicode to false, which would probably be of more use to desktop business programs), such a flag could mark which parts of a script want the legacy behaviour.
The way I suggest implementing it is a variation of Steve Wozniak's "SWEET16" interpreter for his Integer BASIC: http://en.wikipedia.org/wiki/SWEET16
In effect he would switch the 6502 into a pseudo 16-bit chip: a bytecode interpreter, before that became the fashionable way of implementing languages.
With LiveCode we would switch to the "old legacy" interpreter to interpret that part of the code, and it would switch to the legacy code generator for emitting the "compiled" code.
So in the final standalone there would be no difference: code is code.
In the IDE, though, there would be a context switch that would run that code through the older code base.
With this scheme we could have the equivalent of Python 2.x and Python 3.x within the same codebase (taking the two codebases as one).
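To make the SWEET16-style context switch concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (these are not real LiveCode internals): one shared AST, two interpreters, and a dispatcher that picks the engine per handler based on a legacy flag.

```python
# Hypothetical sketch: one shared AST, two engines, and a per-handler
# context switch, in the spirit of SWEET16 flipping the 6502 into a
# pseudo 16-bit machine. All names here are invented for illustration.

def legacy_interpret(node):
    # Stand-in for the 6.x evaluation rules (byte-oriented chars).
    return f"legacy({node['body']})"

def new_interpret(node):
    # Stand-in for the 8.x evaluation rules (Unicode-aware).
    return f"new({node['body']})"

def run_handler(node):
    """Context switch: route the handler's AST to the right engine."""
    engine = legacy_interpret if node.get("legacy") else new_interpret
    return engine(node)

ast = [
    {"name": "oldReport", "legacy": True,  "body": "char 1 of x"},
    {"name": "newSearch", "legacy": False, "body": "codepoint 1 of x"},
]
results = [run_handler(h) for h in ast]
# results == ["legacy(char 1 of x)", "new(codepoint 1 of x)"]
```

In the final standalone the distinction disappears, as the post says: both engines emit the same kind of "compiled" code, and the dispatcher only exists at interpretation time.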
Over time the "legacy" code could be tied into the non-legacy code, i.e. the tokenizer/scanner could become a single piece of code and data used by both "branches", or the code emitter of the "legacy branch" could use as much of the "new code" as makes sense.
Just to be more specific: if a routine in the "old codebase" did a binary search on a part of memory, then that code would be used by both "codebases", so there is less extra baggage but also fewer places where bugs could creep in.
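The binary-search point can be sketched like this (a hypothetical example, not LiveCode code): one shared routine, called from both the "legacy" and the "new" paths, so a bug fix in the search fixes both branches at once.

```python
import bisect

# Hypothetical sketch: a single shared routine (here, a binary search)
# that both the "legacy" and "new" code paths call, so there is only
# one place for bugs to creep in. All names are invented.

def find_index(sorted_items, needle):
    """Shared binary search used by both engines."""
    i = bisect.bisect_left(sorted_items, needle)
    if i < len(sorted_items) and sorted_items[i] == needle:
        return i
    return -1

def legacy_lookup(table, key):
    # Legacy path: sorts and searches with the old comparison rules.
    return find_index(sorted(table), key)

def new_lookup(table, key):
    # New path: same shared search, Unicode-aware keys would go here.
    return find_index(sorted(table), key)
```

The two wrappers can diverge in how they prepare their data, but the search itself stays one routine.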
It would also mean that new additions wouldn't break the "old" code, e.g. because one of the many flag tests was not set correctly.
The idea is that we don't turn the new code into the old spaghetti code with so many tests and routines, but are also able to do what the Python team does, which is keep updating the legacy branch without needing to backport: the best of both worlds.
So someone who had a system that didn't need any Unicode, for instance, could set a global legacy flag at the top and only do a context switch to the new engine where there is something they need from the new system. (This last bit could be totally hare-brained, but it might give food for thought.)
Edit: By global I mean that the compiler/interpreter would do the context switch whenever it "knew" that Unicode-dependent code was about to be executed. I am again making the assumption (I haven't looked at the code) that there is a preprocessing pass before any interpretation or compiling takes place, so the "context switching" could be added to the tokenized/compiled output and the switch back made at the end of the handler automatically.
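As a rough sketch of that preprocessing idea (again hypothetical, all names invented): a pass over the tokenized handlers that wraps each legacy handler with switch/restore markers, so the engine flips context before the handler runs and flips back automatically at the end.

```python
# Hypothetical sketch: a preprocessing pass that injects context-switch
# markers into the token stream around legacy handlers, so the switch
# back happens automatically at the end of each handler.

def preprocess(handlers):
    out = []
    for h in handlers:
        if h["legacy"]:
            out.append(("SWITCH_CONTEXT", "legacy"))
        out.extend(("TOKEN", t) for t in h["tokens"])
        if h["legacy"]:
            # Automatic switch back at the end of the handler.
            out.append(("RESTORE_CONTEXT", None))
    return out

stream = preprocess([
    {"name": "oldDate", "legacy": True,  "tokens": ["put", "the date"]},
    {"name": "newText", "legacy": False, "tokens": ["put", "x"]},
])
```

Whether this costs too much at runtime is exactly the speed question raised below; the markers themselves are cheap, but crossing engines mid-script may not be.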
I leave it to better men (or women) than me to find the problems with speed here; maybe the "global setting" is pie in the sky, but the fact that LC runs code from over 20 years ago is a miracle in itself.
Kindest Regards, Lagi