Slow... file save

Deploying to Windows? Utilizing VB Script execution? This is the place to ask Windows-specific questions.

Moderators: FourthWorld, heatherlaine, Klaus, kevinmiller, robinmiller

Livecode Staff Member
Posts: 1176
Joined: Thu Apr 11, 2013 11:27 am

Re: Slow... file save

Post by LCMark » Thu Apr 20, 2023 9:23 am

Thanks for the clarification, Mark - I didn't realise I could run BEGIN and COMMIT as individual revExecuteSQL commands, with other individual revExecuteSQL commands inside such a block - I had assumed this all had to be inside a single block of SQL (live and learn!).
Yes - the db-connection you make and operate on has its own 'context'. That context consists not only of any options you can configure through 'execute sql', but also of a 'current transaction'.
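This per-connection transaction context isn't specific to revDB - it is how SQLite itself works. A minimal sketch using Python's sqlite3 module (used here purely for illustration, since LiveCode can't be run in this context; the table and names are made up) shows BEGIN and COMMIT issued as ordinary, separate statements on one connection, with the inserts in between joining that connection's current transaction:

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode, so we manage
# the transaction ourselves with explicit BEGIN/COMMIT statements.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# BEGIN and COMMIT are ordinary statements issued on the same connection;
# every execute between them runs inside that connection's transaction.
conn.execute("BEGIN TRANSACTION")
for i in range(3):
    conn.execute("INSERT INTO t (val) VALUES (?)", ("record %d" % i,))
conn.execute("COMMIT")

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 3
```

Until COMMIT runs, none of those inserts are durable; a ROLLBACK at the same point would discard them all.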
Would this be expected to be slower than what you recommend? (as mentioned above this is about 40% faster, but the actual net gain on my Mac is < 50 ms) - out of curiosity, I'll check what you recommend, for my own learning...
The overall operation you are doing here essentially splits up into several separate 'blocks of work' (whether interleaved or not):

1. LiveCode Script execution overhead: the repeat loops, construction of strings, manipulation of data into the form required by the call to 'execute sql'

2. RevDB's processing of the 'execute sql' query before passing it to sqlite

3. SQLite's parsing, checking and compilation of the query to bytecode

4. SQLite's execution of the query

5. SQLite's integration of the result of the query into the database on disk

Here, (1)-(3) are what could be called the 'cost of overhead' of expressing the operation (rather than actually performing it). This is almost certainly a tiny fraction of the overall execution time, and which method (one large single insert, or many smaller inserts) takes less time will almost certainly depend on the number of records and the amount of data per record.

It is more than likely that the 'single large insert' operation will always require fewer CPU instructions to execute; *however*, it will also require much more memory (i.e. the length of the query string plus the amount of bytecode SQLite needs to execute it). So there will likely be a point, in terms of the number of records inserted, where individual inserts take less time than one single insert query (how many records that takes is something which would have to be tested - it would likely be 10,000s if not 100,000s).

[ If I were doing this, I would go with the single-insert-per-record approach because the code is likely to be more maintainable *and* it is completely scalable - the single-large-query approach will break at the point the whole query becomes sufficiently large. There is, of course, a balance point: inserting N records per query at a time, but with all inserts still inside one outer transaction. ]
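That balance point (N records per statement, all inside one outer transaction) can be sketched as follows - again in Python's sqlite3 for illustration only; the table, column and function names are assumptions, not anything from revDB:

```python
import sqlite3

def batched_insert(conn, values, batch_size):
    # Insert batch_size records per INSERT statement, all inside one
    # outer transaction -- the middle ground described above.
    conn.execute("BEGIN TRANSACTION")
    for start in range(0, len(values), batch_size):
        batch = values[start:start + batch_size]
        # One multi-row INSERT per batch: VALUES (?),(?),...,(?)
        placeholders = ",".join(["(?)"] * len(batch))
        conn.execute("INSERT INTO t (val) VALUES " + placeholders, batch)
    conn.execute("COMMIT")

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (val TEXT)")
batched_insert(conn, ["record %d" % i for i in range(1000)], 100)
```

Keeping the batch size fixed bounds the per-statement memory cost, while the single outer transaction still means only one flush to disk at the end.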

Now (4) is the actual 'main work' of the operation - it is where SQLite will actually insert the new records into the internal data structure. SQLite uses a memory caching scheme (where 'pages' in the underlying file are cached and modified in memory rather than the disk being directly touched). This will be proportional to the number of records, the number of indices on the columns touched by those records, and the amount of data in the records.
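The pages that step (4) works on can be inspected through SQLite's pragmas; a small sqlite3 sketch (illustrative only, with made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)", [("x" * 100,)] * 500)
conn.commit()

# Step (4) modifies pages of this size in the in-memory cache
# before anything is written to the file.
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
print(page_size, page_count)
```

The more records, indices and data per record an insert touches, the more of these pages get dirtied and ultimately have to be flushed in step (5).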

Step (5) is where SQLite has to ensure those modified in-memory pages are not only flushed to disk but that the OS *has* actually flushed them to disk (the joy of write caches on media!). The time taken here will be proportional to the number (and size) of the pages which need to be flushed.
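How aggressively step (5) forces the OS to flush is actually tunable via SQLite's synchronous pragma - sketched below with sqlite3 (the trade-off being durability versus commit latency, so this is a diagnostic knob rather than a recommendation):

```python
import sqlite3, os, tempfile

# A throwaway on-disk database (fsync behaviour is moot for :memory:).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)

# FULL (2) fsyncs at every commit -- the flush cost described in step (5);
# NORMAL (1) and OFF (0) skip some or all of those flushes.
before = conn.execute("PRAGMA synchronous").fetchone()[0]
conn.execute("PRAGMA synchronous = NORMAL")
after = conn.execute("PRAGMA synchronous").fetchone()[0]
print(before, after)
```

Timing the same batch of inserts under FULL versus OFF gives a rough measure of how much of the wall time is the flush itself.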

As writing to memory is *much* faster than writing to disk, I'd expect (4) to be a reasonable chunk of the time the operation takes, but (5) to be the lion's share (although that depends on the OS, the target device and a whole host of other things - and I shall avoid going into a side rant about the relative merits of UNIXy vs MSy design 'choices' around system calls, filesystems and memory management ;) ).

So, to answer your question: for smaller numbers of records the single-large-insert approach will almost certainly be faster, but only by a fraction; at some point (in terms of number of records) it will either fail (due to insufficient memory) or start being slower, and eventually substantially slower (due to excessive virtual memory paging to get the memory needed to do it).
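A rough way to observe that crossover on a given machine is to time both shapes directly (sqlite3 sketch with invented names; note that in sqlite3 the single large INSERT also hits SQLite's bound-parameter limit once n is large enough - a concrete instance of the 'it will fail at some point' above):

```python
import sqlite3, time

def time_both(n):
    rows = [("record %d" % i,) for i in range(n)]

    # (a) many small INSERTs inside one transaction
    conn = sqlite3.connect(":memory:", isolation_level=None)
    conn.execute("CREATE TABLE t (val TEXT)")
    t0 = time.perf_counter()
    conn.execute("BEGIN")
    conn.executemany("INSERT INTO t (val) VALUES (?)", rows)
    conn.execute("COMMIT")
    many_small = time.perf_counter() - t0

    # (b) one large multi-row INSERT (breaks once n exceeds the
    # bound-parameter limit -- 999 in older SQLite builds)
    conn = sqlite3.connect(":memory:", isolation_level=None)
    conn.execute("CREATE TABLE t (val TEXT)")
    flat = [v for (v,) in rows]
    t0 = time.perf_counter()
    conn.execute("INSERT INTO t (val) VALUES " + ",".join(["(?)"] * n), flat)
    one_large = time.perf_counter() - t0

    return many_small, one_large

many_small, one_large = time_both(500)
print(many_small, one_large)
```

The absolute numbers will vary wildly by machine and OS, which is rather the point of the discussion above.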

Hope this helps :)


Re: Slow... file save

Post by LCMark » Thu Apr 20, 2023 9:26 am

stam wrote:
Thu Apr 20, 2023 8:57 am
To answer my own question, there is a very slight speed advantage, i.e.:

Code:

revExecuteSQL connectionID, "BEGIN TRANSACTION;"
repeat for each line tRecord in tRecords -- pseudocode: iterate over the records
   revExecuteSQL connectionID, tSQL, "<array>"
end repeat
revExecuteSQL connectionID, "COMMIT;"
is about 5 ms faster on an average of 10 writes: 14.1 ms (min 10 ms, max 17 ms)
(seems a bit counterintuitive to me, but what do I know ;) )
I'll test both variants on my work laptop and hopefully there will be some improvement!
Haha - okay, so that basically means that (in the above) the cost of overhead of the single large insert is actually larger than that of individual inserts in all cases. (It will all be related to memory usage at each of the three stages - LiveCode Script, revDB and SQLite parsing/compilation - I expect.)

Posts: 1999
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Thu Apr 20, 2023 6:56 pm

By way of an update: running individual SQL INSERTs as above led to a dramatic reduction in timings, now down to ~120 ms on my work machine.
The write issue with LC sadly persists - and building a standalone takes 3 times as long as saving the stack file.
This must be something particular to my work laptop and probably can't be fixed... but the SQL issue is certainly greatly improved. Learn something every day :)
