Lagi, the points you raise about abbreviations are good ones, but the subject is quite complex with more aspects at play than any single study I know of has accounted for.
While it's possible to isolate some metrics and extrapolate from there, doing so in isolation risks one of the core problems with extrapolation: one small error in the beginning can easily expand by the end to a much larger gap between expected and actual outcomes.
One property of abbreviations makes them useful for coding but bad for literature, for the very reason some prefer not to use them at all: they are less like English.
For literature, frequent use of abbreviations would of course be counter-productive (perhaps at least in part because in literature an abbrev. requires a period, a symbol we're conditioned to first evaluate as a full-stop), but with code our goals are very different.
Code is distinguished by being purely utilitarian, and ultimately of most explicit benefit to a machine. No matter how we might like to read something, if it's code then at some point it will need to be executed by a machine too stupid to count past 1. Machine constraints define the nature of code as a distinct form of writing.
So in code we use symbols, only some of which have any opportunity to even attempt to be English-like at all.
Code:
put "Some Value" into SomeVariable
...works as both English and code, while this:
Code:
put trunc(tNum) into tArray[tCustomerDiscount]
...would be horrible English, even though it might be very good code.
As you noted, we don't write code nearly as often as we read it. But more importantly, we rarely truly read code. Far more frequently we skim it to get the gist of an algorithm, or scan it for specific elements, such as during debugging or when looking for opportunities for enhancement.
With both skimming and scanning, visual distinction of a symbol aids identification.
Typing full symbols like "card", "background", and "scrollbar" has indeed likely never been cited on any coroner's report as a cause of death.
But for skimming and scanning, longer symbols contribute to a visually denser space in which we're trying to identify specific things. And being "normal" words, they're less distinct cognitively from the other words around them; a long string of long words is simply more cognitive work to process, and the more familiar the words, the less distinct they are.
Ultimately, it's precisely because "cd", "bg", and "sb" aren't "normal" words that they're valuable: they refer to language-specific objects that have unique meaning in LiveCode; it's beneficial that they stand out visually in a dense body of code, every bit as much as using less-readable-as-literature-but-very-productive-as-code naming conventions for variables.
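To make that concrete, here's a sketch showing the same line both ways (the object and variable names here are hypothetical, chosen just to illustrate the contrast):

```livecode
-- Fully spelled out: every token reads like ordinary English,
-- so nothing in the line stands out when scanning
put the thumbPosition of scrollbar "List" of card "Main" into tPos

-- Abbreviated: "sb" and "cd" are visually distinct from the
-- surrounding English words, so they pop out of dense code
put the thumbPosition of sb "List" of cd "Main" into tPos
```

Reading the second line as prose is worse; finding the scrollbar reference while scanning a few hundred lines is easier.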
While Jacque is almost always less verbose than I am, I believe these considerations may be at the heart of her reaction to the thought of removing abbreviations. And they may be among the reasons why the original designers of this family of languages took considerable time and thought to include them.
And most simply, they've been around long enough in LiveCode that there's no practical way to remove them.
This leaves us in a pretty good place, as we find ourselves with so many of life's choices: those who like them can use them, and those who don't can use something else.
Vive la différence.
PS: Enjoy more bash. Not only does it offer tremendous value for system automation, it also lets us explore many language design choices quite different from those in LiveCode, and some are quite nice. Brevity is one of bash's most famous distinctions, but there are others as well (odd as it seemed when I was getting started, I've become fond of "fi" <g>).
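A small sketch of what I mean (the variable names are just illustrative):

```shell
#!/usr/bin/env bash
# bash's terse control-flow keywords: "if" blocks close with "fi",
# the reversed keyword mentioned above.
count=3
if [ "$count" -gt 1 ]; then
  label="many"
else
  label="one or none"
fi
echo "$label"

# Brevity elsewhere too: ${var:-default} supplies a fallback value
# in just a few characters (assuming EXAMPLE_NAME is unset here).
name="${EXAMPLE_NAME:-world}"
echo "hello, $name"
```

Whether that density reads as elegant or cryptic is, of course, exactly the kind of taste question this whole thread is about.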