Yes - so the db-connection you make and are operating on has its own 'context'. That context consists not only of any options you can configure through execute sql, but also a 'current transaction'.

> Thanks for the clarification Mark - I didn't realise I could run BEGIN and COMMIT as individual revExecuteSQL commands, with other individual revExecuteSQL commands inside such a block - I had assumed this all had to be inside a single block of SQL (live and learn!).
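The pattern in question - BEGIN and COMMIT issued as individual statements, with ordinary individual statements in between - can be sketched against SQLite directly. This is a hedged illustration using Python's sqlite3 module rather than revExecuteSQL (the same SQLite engine sits underneath revdb's sqlite driver); the table name `t` and the data are made up:

```python
import sqlite3

# In-memory database; isolation_level=None puts the connection in
# autocommit mode so we control transactions ourselves, mirroring
# issuing BEGIN/COMMIT as separate revExecuteSQL calls.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

conn.execute("BEGIN")            # one statement: open the transaction
for i in range(100):
    conn.execute("INSERT INTO t (v) VALUES (?)", (f"row {i}",))
conn.execute("COMMIT")           # one statement: close it

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

All 100 inserts become durable together at the COMMIT; until then they live only in the connection's current transaction.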
> Would this be expected to be slower than what you recommend? (as mentioned above this is about 40% faster, but the actual net gain on my Mac is < 50 ms) - out of curiosity, I'll check what you recommend, for my own learning...

The overall operation you are doing here essentially splits up into several separate 'blocks of work' (whether interleaved or not):
1. LiveCode Script execution overhead: the repeat loops, construction of strings, manipulation of data into the form required by the call to 'execute sql'
2. RevDB's processing of the 'execute sql' query before passing it to sqlite
3. SQLite's parsing, checking and compilation of the query to bytecode
4. SQLite's execution of the query
5. SQLite's integration of the result of the query into the database on disk
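The effect of the transaction boundary on steps (3)-(5) can be sketched as follows - again in Python's sqlite3 as a stand-in for revExecuteSQL, with made-up table and row counts. Without an explicit BEGIN, each INSERT is its own transaction, so the commit machinery of step (5) runs once per row rather than once overall (the gap is far larger on an on-disk database, where step (5) actually hits the platter):

```python
import sqlite3
import time

def insert_rows(conn, n, one_transaction):
    """Time n single-row inserts, either wrapped in one transaction
    or left in autocommit mode (one implicit transaction per row)."""
    conn.execute("DELETE FROM t")
    start = time.perf_counter()
    if one_transaction:
        conn.execute("BEGIN")
    for i in range(n):
        conn.execute("INSERT INTO t (v) VALUES (?)", (str(i),))
    if one_transaction:
        conn.execute("COMMIT")
    return time.perf_counter() - start

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

t_autocommit = insert_rows(conn, 1000, one_transaction=False)
t_single_txn = insert_rows(conn, 1000, one_transaction=True)
```

Both runs end with the same 1000 rows; only the number of commits differs.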
Here, (1-3) are what could probably be called the 'overhead cost' of expressing the operation (but not actually performing it). This is almost certainly going to be a tiny fraction of the overall execution time, and which method (one large single insert, or many smaller inserts) takes less time will almost certainly depend on the number of records and the amount of data per record.
It is more than likely that the 'single large insert' operation will always execute fewer actual CPU instructions; *however*, it will also require much more memory (i.e. the length of the query string plus the amount of bytecode SQLite needs to execute it). So there will likely be a point (in terms of number of records inserted) beyond which individual inserts take less time than one single insert query. Of course, how many records that takes is something which would have to be tested - it would likely be in the 10,000s if not 100,000s.
[ If I were doing this, I would go with the single-insert-per-record approach, because the code is likely to be more maintainable *and* it is completely scalable - the single-large-query approach will break once the whole query grows sufficiently large. There is, of course, a balance point: inserting N records per query at a time, but with all the inserts still inside one outer transaction. ]
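That balance-point approach - N records per INSERT statement, all inside one outer transaction - might look like the following sketch (Python's sqlite3 again; the helper name, chunk size of 500 and table `t` are all made up for illustration):

```python
import sqlite3

def insert_in_chunks(conn, rows, chunk_size=500):
    """Insert rows N-at-a-time using multi-row VALUES syntax,
    all inside one outer transaction."""
    conn.execute("BEGIN")
    for i in range(0, len(rows), chunk_size):
        chunk = rows[i:i + chunk_size]
        # One placeholder group per row in this chunk: (?),(?),...
        placeholders = ",".join("(?)" for _ in chunk)
        conn.execute(f"INSERT INTO t (v) VALUES {placeholders}", chunk)
    conn.execute("COMMIT")

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

insert_in_chunks(conn, [f"row {i}" for i in range(1234)], chunk_size=500)
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

The chunk size keeps each individual query (and the bound-parameter count) bounded, so the approach never hits the memory wall that one giant query would, while still amortising the per-statement overhead.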
Now (4) is the actual 'main work' of the operation - it is where SQLite will actually insert the new records into the internal data structure. SQLite uses a memory caching scheme (where 'pages' in the underlying file are cached and modified in memory rather than the disk being directly touched). This will be proportional to the number of records, the number of indices on the columns touched by those records, and the amount of data in the records.
Step (5) is where SQLite has to ensure those modified in-memory pages are not only written out to disk, but that the OS *has* actually flushed them to the media (the joy of write caches!). The time taken here will be proportional to the number (and size) of the pages which need to be flushed.
As writing to memory is *much* faster than writing to disk, I'd expect (4) to be a reasonable chunk of the time the operation takes, but (5) to be the lion's share (although that does depend on the OS, target device and a whole host of other things - and I shall avoid going into a side rant about the relative merits of UNIXy vs MSy design 'choices' around system calls, filesystems and memory management).
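How hard SQLite works at step (5) is actually tunable, via the PRAGMA synchronous setting: FULL waits for the OS to confirm pages have reached the disk, while OFF just hands them to the OS write cache and moves on. A small sketch (Python's sqlite3; the temporary file path is made up, and this only shows setting and reading the pragma, not benchmarking it):

```python
import os
import sqlite3
import tempfile

# An on-disk database - synchronous only matters when there is a real
# file whose pages must be flushed at commit time.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)

conn.execute("PRAGMA synchronous = FULL")   # safest: fsync at each commit
# Querying the pragma returns its numeric code: 2 means FULL.
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
```

Lowering the setting trades durability on power loss for faster commits, which is exactly the step-(5) cost being discussed.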
So, to answer your question: for smaller numbers of records the single-large-insert approach will almost certainly be faster, but only by a fraction; at some point (in terms of number of records) it will either fail outright (due to insufficient memory), or start being slower and eventually substantially slower (due to excessive virtual-memory paging to get the memory needed to do it).
Hope this helps