Retaining Backwards Compatibility in a Changing World

Moderators: Klaus, FourthWorld, heatherlaine, kevinmiller, LCMark

Livecode Staff Member
Posts: 1097
Joined: Thu Apr 11, 2013 11:27 am

Re: Retaining Backwards Compatibility in a Changing World

Post by LCMark » Fri Mar 13, 2015 8:44 am

@mwieder: The current parser can certainly be made to flag syntax which might be affected by changes, but things like type inference and data-flow analysis would be needed to advise about other changes (string <-> empty array conversions, for example). We are going to be re-hosting the current scripting language on the LCB VM at some point down the line, which might open up greater opportunity here, as in that design there is only one place where type conversions and action executions occur. (The current LCS implementation uses hand-crafted C++ to type-check each argument of each piece of syntax and then dispatch it, meaning all that C++ code would have to be instrumented in order to offer any sort of analysis mode.)

One option would be to have a run mode where the VM accumulates 'new execution mode' violations against lines of script: as the scripts run, the VM notices things that are fine with compatibility mode on but would not be with it off, and logs them:

Code: Select all

on doBadThings pString
  combine pString with ","
end doBadThings

The implicit array <-> empty string conversion occurs when passing pString to 'combine', so at the point the VM does the type conversion it would be possible to annotate that line with something along the lines of "Warning: <array> argument of 'combine' force-converted from string to empty array". Realistically, due to the dynamic nature of LCS, static analysis of this kind of thing can only go so far; the only way to find all the places which might cause problems is through execution and inspection.
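
To make the idea concrete, here is a rough Python sketch (all names hypothetical; the real engine is C++): a VM whose conversion routine records a per-line warning whenever compatibility mode permits a coercion that the new mode would reject.

```python
# Hypothetical sketch of a VM "violation log": in compatibility mode,
# implicit conversions succeed but are recorded against the source line.

class CompatVM:
    def __init__(self, compatibility_mode=True):
        self.compatibility_mode = compatibility_mode
        self.violations = []  # (line, message) pairs accumulated as scripts run

    def to_array(self, value, line, context):
        """Coerce a value to an array, logging legacy string->array coercions."""
        if isinstance(value, dict):
            return value
        if value == "":
            if not self.compatibility_mode:
                raise TypeError("cannot convert string to array")
            self.violations.append(
                (line, "<array> argument of '%s' force-converted "
                       "from empty string to empty array" % context))
            return {}  # the legacy behaviour: empty string becomes empty array
        raise TypeError("cannot convert non-empty string to array")

vm = CompatVM()
vm.to_array("", line=2, context="combine")  # pString arrives as empty string
```

After a test run, `vm.violations` would hold every line that relied on legacy coercions, which could then be surfaced in the IDE.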

Posts: 10
Joined: Thu Jul 26, 2007 7:23 pm

Re: Retaining Backwards Compatibility in a Changing World

Post by slylabs13 » Mon Mar 23, 2015 3:42 pm

I think getProp/setProp should remain as they are. They are custom *properties*, not custom *handlers*. Built-in properties can be set/gotten when messages are suppressed. Why confuse things? If you don't want them to fire, don't use a custom property, or else check to see if messages are locked before proceeding with your handler.
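
The suggested pattern can be sketched in Python (illustrative only, not engine code): the property handler always fires, and it is the handler's own job to check the lock and bail out of its side effects.

```python
# Illustrative analog of a setProp handler that tests a "messages locked"
# flag itself, per the suggestion above. All names are hypothetical.

class Control:
    def __init__(self):
        self.lock_messages = False   # analog of LiveCode's lockMessages
        self.props = {}

    def set_prop(self, name, value):
        # The handler always runs; it decides what the lock means for it.
        if self.lock_messages:
            self.props[name] = value        # store verbatim, no side effects
            return
        self.props[name] = value.strip()    # example side effect: normalise

c = Control()
c.set_prop("uTitle", "  hello  ")    # handler logic runs: stored as "hello"
c.lock_messages = True
c.set_prop("uNote", "  raw  ")       # lock honoured: stored untouched
```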

Posts: 10
Joined: Thu Jul 26, 2007 7:23 pm

Re: Retaining Backwards Compatibility in a Changing World

Post by slylabs13 » Mon Mar 23, 2015 3:44 pm

"whether an array's keys are considered case-sensitive should be a property of the array, and not based on a local property"

There is already a property (caseSensitive) for case sensitivity. Does this not work with array keys as with everything else? Why make arrays a special case? That will just introduce more confusion for new users.
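
For context, the quoted concern can be sketched in Python (illustrative only, not engine code): when case sensitivity lives in the environment rather than in the array, the same lookup on the same array can give different answers depending on which handler's setting is in force.

```python
# Sketch of environment-level case sensitivity, as with a local property.

case_sensitive = False   # analog of LiveCode's caseSensitive local property

def get_key(array, key):
    """Look a key up under whatever the *current* environment setting is."""
    if case_sensitive:
        return array.get(key)
    folded = key.lower()
    for k, v in array.items():
        if k.lower() == folded:
            return v
    return None

tData = {"myKey": 1}
first = get_key(tData, "MYKEY")    # found while caseSensitive is false
case_sensitive = True              # some other handler flips the setting
second = get_key(tData, "MYKEY")   # same array, same key: not found
```

Whether that is a bug or a feature is exactly the disagreement in this thread.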

Posts: 10
Joined: Thu Jul 26, 2007 7:23 pm

Re: Retaining Backwards Compatibility in a Changing World

Post by slylabs13 » Mon Mar 23, 2015 3:56 pm

LCMark wrote: As LiveCode moves forward we are increasingly being 'held back' to a certain degree with the need to retain (as far as is practical) existing script compatibility.

- revXML and revDB functions should be returning (abstract) strings and not raw data
- 'the result' not being a global variable and being local to handlers
- error handling being unified as exceptions
Not sure what an abstract string is; I thought a string was a string. However, remember that LiveCode will do some conversions when converting data to strings. Do we really want that? I store the binary data needed to re-create PDF forms in a binary large object (BLOB) column; convert it just a tiny bit and my app is destroyed.

I wasn't aware the result was local, but does it really matter? It changes often enough that users should be in the habit of storing the result into whatever variable they like. If users want the result to be global, store the result in a global variable. If they want it to be local, store it in a local variable. If they want it one way one time and one way another... well you get the drift. It already works that way.

If by "unified as exceptions" you mean I can create a single handler like "on exception pErrorCode", then that has some merit. Some errors, like database connection errors, would be easier to deal with using a method like this, rather than having to put a try/catch construct around every database function I use. But I don't think they should be mutually exclusive. Sometimes try/catch is a very convenient way of isolating a particular bug, and having to determine which line in which script caused the error from a unified exception handler would be less than convenient.
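
A rough Python sketch of how such an "on exception" style unified handler could coexist with local try/catch, as suggested; every name here is hypothetical, not LiveCode API.

```python
# Unified last-resort handler alongside local try/catch.

unhandled_log = []

def on_exception(error_code):
    """Unified handler: sees only errors no local try/catch absorbed."""
    unhandled_log.append(error_code)

def run_handler(fn):
    try:
        fn()
    except Exception as e:
        on_exception(str(e))   # bubbles up to the unified handler

def db_query():
    raise RuntimeError("database connection error")

def careful_handler():
    try:
        db_query()
    except RuntimeError:
        pass                   # local try/catch isolates this failure

run_handler(db_query)          # no local catch: unified handler sees it
run_handler(careful_handler)   # local catch wins: unified handler never fires
```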

Posts: 10
Joined: Thu Jul 26, 2007 7:23 pm

Re: Retaining Backwards Compatibility in a Changing World

Post by slylabs13 » Mon Mar 23, 2015 4:06 pm

jamsk wrote:Change is painful, and letting go of the past is difficult, but I say unshackle the team and move forward. Keep the legacy engines available for older code to run on older systems, but don't let compatibility issues interfere with moving the platform forward. OSes will continue to evolve, hardware will continue to evolve, and so must coding. Just don't 'pull an Apple' and suddenly abandon older technology.
I am perhaps not understanding the various points all that well, but it seems to me most of these points can be addressed already. I mean, why the bejeepers do we need case sensitivity in our code anyway?? Why would I ever want a key called "myKey" and a totally separate key called "Mykey"? Why can't devs simply put the result into whatever kind of variable they like? Most of these points seem to me to be much ado about nothing.

But I agree with Mark on last delimiters. If I put a delimiter at the end of a string, I will thank the engine very much to *not* assume I meant nothing by it. As any developer has long recognized, "empty" is not the same thing as "null".
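
Python, like most languages, agrees with this reading: a trailing delimiter marks a final, empty item, and discarding it has to be an extra, explicit step.

```python
# A trailing delimiter delimits a final empty item; it is not noise.

text = "a,b,c,"
items = text.split(",")       # ['a', 'b', 'c', ''] -- the empty item survives

# A legacy "ignore the last delimiter" behaviour must strip it explicitly:
legacy_items = list(items)
if legacy_items and legacy_items[-1] == "":
    legacy_items.pop()        # ['a', 'b', 'c']
```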

VIP Livecode Opensource Backer
VIP Livecode Opensource Backer
Posts: 230
Joined: Tue Jun 30, 2009 11:15 pm

Re: Retaining Backwards Compatibility in a Changing World

Post by mickpitkin92 » Thu Mar 26, 2015 2:44 am

If I might interject (and I apologise if this has already been mentioned earlier in the thread): perhaps the changes you suggest, such as those to setProp/getProp and the result, should be postponed until the language gets overhauled as was planned on the roadmap back in 2013. If memory serves, one of the stages on the roadmap was the Open Language project, which I believe Mark said at the time would be a revamp of the language to be cleaner and more modular, combined with a virtual machine of sorts in the engine (via a library or built in) that would let legacy stacks run on the new engine, similar to how Rosetta allowed PPC apps to run on Intel-based OS X machines for a short while after 2006.

Basically this would allow you to say: "Right, 9.0 is going to be the start of the Open Language milestone. Everything developed before 9.0 is considered legacy and runs via the virtual machine; everything from 9.0 up is Open Language, and any changes we make here will be incompatible with the legacy stuff." Then, at compile time (us compiling a stack, not C++ compilation), the standalone settings dialog could have an option such as "Enable support for pre-9.0 stack files" (I'm using 9.0 as a placeholder here) that tells the standalone engine to load the virtual machine so the legacy stuff can run. If the option isn't enabled, and thus the virtual machine isn't available, attempts to load legacy stacks would yield an error.

Alternatively, as Mark has mentioned, perhaps the stackVersion property he has in mind could be used in a similar way to how Microsoft is dealing with Win32 compatibility in Windows 8, where backwards-compatibility-breaking changes can be introduced into a new version but existing apps are treated as if the OS never left Windows 8, until the developer updates the app and declares to the OS, via the application manifest, that it supports the newer version. This has been brought to light recently with Windows 10 and the NT kernel being upped to NT 10.0: given that LiveCode 7 doesn't declare support for Windows 8.1 or 10 in its app manifest, calls to the systemVersion return NT 6.2, and thus LiveCode thinks it's running on Windows 8.
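
To illustrate the manifest idea in Python (all names and version numbers hypothetical): breaking changes are recorded against the version that introduced them, and a stack only opts in to the changes at or below the version it declares.

```python
# Hypothetical registry of breaking changes, keyed by introducing version.
BREAKING_CHANGES = {
    (8, 0): "strict string/array conversion",
    (9, 0): "trailing delimiter kept",
}

def active_changes(declared_version):
    """Changes a stack opts in to by declaring support for a given version."""
    return {label for version, label in BREAKING_CHANGES.items()
            if declared_version >= version}

legacy = active_changes((6, 7))   # declares nothing new: full legacy behaviour
modern = active_changes((9, 0))   # opts in to every change so far
```

An old stack that never updates its declared version keeps its original semantics indefinitely, exactly as with an unchanged Windows app manifest.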

I'm inclined to go with option A and focus on Open Language because, to be quite honest, backwards compatibility is a love-hate relationship. We all love running our old stuff; heck, I fired up ePSXe and played through Crash Bandicoot this weekend, then completed Oddworld: New 'n' Tasty just last night, purely for the nostalgia trip. But we also hate backwards compatibility. Just look at the number of flaws found in the Windows Virtual DOS Machine that gained kernel privileges and were wide open for three decades (they could only strip that out of 64-bit Windows). Then look at the number of flaws in regular Win32 caused by decades of backwards compatibility, and at the people screaming for Microsoft to take an Apple approach to compatibility.

Then you have the developers like Mark, who just want to add new stuff but end up chewing on it and giving themselves a migraine trying to get it to work without breaking things. I have no idea if any of this makes a lick of sense; I seem to have transitioned from an opinion to rambling to a rant, then a nostalgia trip, and then back to a rant.

TL;DR Mark, save the new stuff for the open language milestone, it'll probably be easier on the noggin than trying to balance not breaking stuff and ending up in Microsoft's position.

God knows I wouldn't have the patience to do it.


Lagi Pittas
VIP Livecode Opensource Backer
Posts: 349
Joined: Mon Jun 10, 2013 1:32 pm

Re: Retaining Backwards Compatibility in a Changing World

Post by Lagi Pittas » Mon Mar 30, 2015 12:13 pm


I very much agree with mickpitkin92 .

but I would go about it in a different way. "Nothing is impossible for the man who doesn't have to do it himself" :wink:

First, the problem of managing legacy commands/functions etc.: my suggestion would be that a switch back to legacy could be made at a more granular level.

Rather than having a humongous code base with lots of flags and tests everywhere, keep the 6.x branch as it is but make it target the Abstract Syntax Tree (AST) of the 8.x branch (I'm assuming the parser works like most standard compilers).

Now, in many cases when something breaks it's probably in a couple of handlers/functions (don't you just hate generalisations?); so, for instance, the routines that access chars might break if you are using Unicode.

Suppose we had the equivalent of useSystemDate, which is in effect a local flag (and maybe we have different versions, e.g. "set the useUnicode to false", which would probably be of more use to desktop business programs).

The way I suggest implementing it is with a variation of Steve Wozniak's "Sweet 16" interpreter for his Integer BASIC.

In effect he would switch the 6502 to a pseudo 16-bit chip: a bytecode interpreter, before bytecode became the standard way of implementing languages.

With LiveCode we would switch to the "old legacy" interpreter to interpret that part of the code, and it would switch to the legacy code generator for emitting the "compiled" code.

So in the final standalone there would be no difference - code is code.

In the IDE, though, there would be a context switch that would run the code from the older code base.

With this scheme we could have the equivalent of Python 2.x and Python 3.x within the same codebase (taking the two codebases as one).

Over time the "legacy" code could be tied into the non-legacy code, i.e. the tokenizer/scanner would become a single piece of code and data used by both "branches", or the code emitter of the "legacy branch" would reuse as much of the "new code" as makes sense.

Just to be more specific: if a routine in the "old codebase" did a binary search on a part of memory, that code would be used by both "codebases", so there is less extra baggage but also fewer places where bugs could creep in.
It would also mean that new additions wouldn't break the "old" code, e.g. because one of the many flag tests was not correctly set.

The idea is that we don't turn the new code into the old spaghetti code with so many tests and routines, but are still able to do what the Python team does: keep updating the legacy branch without needing to backport. The best of both worlds.

So someone who had a system that didn't need any Unicode, for instance, could have a global

Code: Select all

 set UseSystem6 to true
at the top and do a context switch to

Code: Select all

set UseSystem8 to true
where there is something that they need from the new system. (This last bit could be totally hare-brained, but it might give food for thought.)

Edit: By "global" I mean that the compiler/interpreter would do the context switch whenever it "knew" that Unicode-dependent code was about to be executed. I am again assuming (I haven't looked at the code) that there is a preprocessing pass before any interpretation or compiling takes place, so the "context switching" could be added to the tokenized/compiled output and the switch back made automatically at the end of the handler.
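
A rough Python sketch of the context switch being described (every name here is hypothetical): the preprocessing pass tags each handler with the system it declared, and the interpreter dispatches each call to the matching legacy or new implementation.

```python
# Per-handler context switch between "legacy" and "new" implementations.

def upper_legacy(s):
    # legacy, byte-oriented behaviour: only ASCII letters are case-mapped
    return "".join(chr(ord(c) - 32) if "a" <= c <= "z" else c for c in s)

def upper_unicode(s):
    # new, Unicode-aware behaviour
    return s.upper()

IMPLEMENTATIONS = {"system6": upper_legacy, "system8": upper_unicode}

class Handler:
    def __init__(self, system):
        # 'system' would be recorded when the preprocessor sees a
        # hypothetical `set UseSystem6 to true` / `set UseSystem8 to true`
        self.system = system

    def upper(self, s):
        # the context switch: dispatch to this handler's implementation
        return IMPLEMENTATIONS[self.system](s)

old_handler = Handler("system6")
new_handler = Handler("system8")
```

In a standalone, both implementations are just code; only the dispatch table decides which one a given handler runs against.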

I leave it to better men (or women :wink:) than me to find the problems with speed here. Maybe the "global setting" is pie in the sky, but the fact that LC runs code from over 20 years ago is a miracle in itself.

Kindest Regards, Lagi

VB6 Programming
Posts: 2
Joined: Fri Apr 03, 2015 11:49 am

Re: Retaining Backwards Compatibility in a Changing World

Post by VB6 Programming » Fri Apr 03, 2015 12:10 pm

Ask any VB developer about VB6 and you'll get an earful even louder than the most die-hard Python 2 fan. :)
Yes, Microsoft got it totally wrong with VB6.

And rarely is there any reason to make breaking changes. You can add new features whilst still retaining old ones (and you can mark them as 'deprecated' if necessary).
