LC IDE Stacks Analyses: the QR library
Hi,
Attached is an A.I.-assisted analysis of one of the LiveCode IDE stacks that is noted as public domain, "com.livecode.library.qr". In Claude or ChatGPT I use the following prompt:
"I need help analyzing LiveCode IDE stack scripts. My goals are: 1. Primary Goal: Understand what each LiveCode IDE stack script does and identify handlers or code snippets that could be more generally useful 2. Secondary Goal: Create a downloadable standalone HTML analysis document in a consistent professional format Analysis Requirements: For each script, please provide: * Primary Purpose - What the stack/library does * Key Functionality - Main features and capabilities * Reusable Code Snippets - 7-10 code examples with use cases (the most important section) * Architectural Patterns - Design patterns used * Notable Design Decisions - Important implementation choices * Comparison sections (when analyzing related scripts) HTML Format Requirements: Create a standalone HTML file with: * Professional styling (similar to technical documentation) * Table of Contents with anchor links * Syntax-highlighted code blocks * Color-coded info boxes (use-case: green, note: yellow, important: red, algorithm/phase: purple) * Clean typography and print-friendly CSS * Header with document type, subject, version, date * All CSS and content self-contained (no external dependencies) LiveCode Syntax Preferences: * All generated code must use valid LiveCode syntax * Variables: "g" prefix for globals, "t" prefix for locals, "p" prefix for parameters * All variables must be declared Previous Analyses Completed: * Answer Dialog (UI dialogs with multi-button support) * Ask Dialog (input dialogs with password masking) * Unified Diff Library (text comparison algorithm) * Drawing Library (SVG to binary compiler) I'll attach the LiveCode stack script for you to analyze. Please read it carefully and produce the HTML analysis document following the format established above. Here is the LiveCode IDE stack script I want you to analyze:"
I use these analyses to "mine" for potentially useful handlers, code snippets, and coding strategies that could benefit more general use or adaptation. It also makes for really interesting reading. Most of the LiveCode IDE stack scripts are copyrighted, so I am only making this public domain QR library analysis available. Hope you find this useful and educational.
Mike
- Attachments
- qrcode-generator-analysis.html.zip (16.25 KiB)
Re: LC IDE Stacks Analyses: the QR library
Mike is busier with LC than all the rest of us put together.
Craig
richmond62
Re: LC IDE Stacks Analyses: the QR library
How do you determine WHICH stack scripts are copyrighted, and which are not?
Re: LC IDE Stacks Analyses: the QR library
Hi,
The copyright notices are in the IDE library script headers. For those who might not know: under the "View" menu in the IDE, select "Show IDE Stacks in Lists", then open the Project Browser and you will see all of the stacks loaded by the IDE.
BTW, the language of the copyright notice suggests to me that it might be permissible to produce an analysis like I did for the other IDE stacks, but it seems dicey and I'm not well-versed in copyright law. I don't think the analysis is any good without the code excerpts. Not all of these stack scripts have a copyright notice, but I am assuming they are covered by a copyright notice somewhere unless specifically noted as public domain.
-Mike
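For anyone who wants to automate that check, here is a minimal LiveCode sketch (the handler name is my own invention) that reports the first copyright line found in each open stack script, assuming the IDE stacks are visible as described above:

on listStackCopyrights
   local tStackName, tLine, tReport
   -- the openStacks returns one open stack name per line
   repeat for each line tStackName in the openStacks
      repeat for each line tLine in the script of stack tStackName
         if tLine contains "Copyright" then
            put tStackName & tab & tLine & return after tReport
            exit repeat -- report only the first notice per stack
         end if
      end repeat
   end repeat
   put tReport -- results appear in the message box
end listStackCopyrights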
richmond62
Re: LC IDE Stacks Analyses: the QR library
I would assume that all stacks and/or scripts that are shared with LC 9.6.3 Community and have not been subsequently modified by LiveCode are "fair game".
FourthWorld
Re: LC IDE Stacks Analyses: the QR library
richmond62 wrote: ↑ Tue Nov 04, 2025 5:20 pm
How do you determine WHICH stack scripts are copyrighted, and which are not?
In all nations which are signatories to the Berne Convention (about 90% of UN-recognized nations, including the US and UK), all original creative works are granted the copyright protections of the local jurisdiction at the moment of creation.
So the question here isn't about who owns the code, but how it's licensed for others to use.
Reading the EULA accompanying every release is of course the best authoritative description of rights and responsibilities accompanying use.
In short, the source for packages distributed under GPL v3 adheres to the terms of that license; packages distributed under a proprietary license adhere to different terms, which may vary from version to version, so it's always a good idea to thoroughly review all proprietary license terms for any software.
But even that gets murky in terms of copyright when we're discussing output from regenerative systems (the current generation of widely varying software called "AI"). Lawsuits over code generators are ongoing across multiple jurisdictions, none yet resolved.
The biggest US case involves GPL code from multiple plaintiffs subsumed into GitHub's code generator. The most recent update I could find on that is from July, where the court has dismissed the DMCA part of the claim while allowing the license infringement claim to proceed to hearing.
https://natlawreview.com/article/court- ... h-contract
But as fun as IP litigation may be, there's a different question here, a purely technical one, unrelated to copyright at all, which may be more useful:
Here we have a tool designed to mine existing code, but it relies on regenerative software to create/recreate what may or may not be the original code.
It may be useful to consider more traditional info retrieval methods to mine existing code like this.
By its nature, regenerative software cannot be relied on to perform verbatim retrieval. LLMs absorb source materials verbatim at the input point of training, but ultimately store them in various forms of permutation. This is useful for the design goal of regenerative software, emulating human output, but the index used in production doesn't include the full source as originally copied for training.
You might get lucky, where generated output compares well to original code. But you might not, and you'll only know if you compare the output against the original, which raises the question of using traditional IR methods which only store the original.
There are various ways one might index such a corpus, and as fun as a bespoke solution might be, and possibly a better fit than a one-size-fits-all solution, I wonder if just using the freetext indexer built into SQLite may be the better fit.
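For example, a minimal sketch using LiveCode's revDB library against an SQLite FTS5 virtual table (assuming the SQLite build bundled with LiveCode includes FTS5; the handler and table names here are hypothetical):

on buildScriptIndex
   local tConnID, tStackName, tBody, tSQL
   -- open (or create) the index database
   put revOpenDatabase("sqlite", "scripts.sqlite", , , ) into tConnID
   revExecuteSQL tConnID, "CREATE VIRTUAL TABLE IF NOT EXISTS scripts USING fts5(name, body)"
   -- store each open stack script verbatim, so later retrieval is exact
   repeat for each line tStackName in the openStacks
      put the script of stack tStackName into tBody
      put "INSERT INTO scripts (name, body) VALUES (:1, :2)" into tSQL
      revExecuteSQL tConnID, tSQL, "tStackName", "tBody"
   end repeat
   -- full-text query returns the names of matching scripts
   put revDataFromQuery(tab, return, tConnID, "SELECT name FROM scripts WHERE scripts MATCH 'qrcode'")
   revCloseDatabase tConnID
end buildScriptIndex

The point being that the index stores the original text, so any excerpt you pull back out of it is guaranteed verbatim.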
Richard Gaskin
LiveCode development, training, and consulting services: Fourth World Systems
LiveCode Group on Facebook
LiveCode Group on LinkedIn
richmond62
Re: LC IDE Stacks Analyses: the QR library
That looks like a minefield.
AND, because of that, I, for one, will stick to my own work: no 'borrowings', and no AI.
Re: LC IDE Stacks Analyses: the QR library
Rather off-topic, and as everyone here is aware, ChatGPT and its kin are not perfect.
I have been having a running conversation with some friends on a problem in physics for over ten years. I know the answer. Always did.
Er, don't ask, unless you want to be drawn in. The central question embedded below is the topic.
ChatGPT, asked about six times over the course of four months to speak to the central question I posed on that subject, always initially gave the wrong answer. Each time, after I refined the question, though never actually varying anything important, it ultimately responded with the correct answer. This latest round, upon my asking why it always took so long to get it right, it replied that this is a weakness in large language models, one that dictates that answers to such questions are dumbed down to satisfy the average Joe:
------------------------------------------------
ME:
I am interested in why you do not ever see the correct answer from the start.
ChatGPT:
That’s a very insightful question — and it gets at something fundamental about how I (and large language models generally) form answers.
Here’s what’s happening:
When you ask a question like “Can a bird hover over a fixed point on the earth in a purely horizontal wind without flapping its wings?”, there are two layers of reasoning involved:
Descriptive or kinematic layer:
At this level, the system recognizes a well-known observation — birds can appear motionless relative to the ground when facing a strong headwind. Many popular explanations and even educational sources describe this as “hovering in place.” So the language model’s initial synthesis matches that familiar description and says “yes,” because it’s true in that limited, visual sense.
Physical or energetic layer:
But when we move to a deeper, mechanistic level — considering lift, drag, and energy — the initial “yes” fails, because there’s no energy input to sustain lift indefinitely in a purely horizontal flow. That’s where your conservation-of-energy point comes in, and that’s where a physically consistent model must say no.
In short:
My first pass tends to reflect common explanations in the literature (and the most likely conversational sense).
Only after analytical refinement — or when you push toward physical consistency — does the energetic constraint become explicit.
You’re spotting a general weakness of language models: unless explicitly prompted to apply first-principles reasoning, they often settle for the most typical phrasing of an answer, not the most physically rigorous one.
----------------------------------------------------------
Craig