Slow... file save

Deploying to Windows? Utilizing VB Script execution? This is the place to ask Windows-specific questions.

Moderators: FourthWorld, heatherlaine, Klaus, kevinmiller, robinmiller

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Slow... file save

Post by stam » Tue Apr 18, 2023 1:14 pm

Hi all,

I'm working on an app for our department that uses an SQLite database.
It works perfectly and is super fast on my Mac, but that is not the case on my work Win10 machine: writing the contents of an array to SQLite takes less than a second on my Mac but around 2 minutes on Win10 (OK, it's no great machine, but an Intel i5 should be able to do better).

So, with some bending of security rules, I managed to get LiveCode installed on my work machine to investigate.
However, even just saving the LC stack file takes about 5 minutes! So maybe it has nothing to do with my code!

Has anyone encountered issues like this on Windows before? (i.e. a stack file taking > 3 minutes to save)
Any suggestions?

Thanks in advance,
Stam

SparkOut
Posts: 2852
Joined: Sun Sep 23, 2007 4:58 pm

Re: Slow... file save

Post by SparkOut » Tue Apr 18, 2023 5:01 pm

Hi Stam, this "sounds like" a throwback to the bad old days of poor performance on Windows, but those issues should mostly be solved now. My first suspicion would be security/antimalware controls checking file access in real time. Can you get the SQLite file and/or the app whitelisted?

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Wed Apr 19, 2023 8:08 am

Thanks Sparkout,

Nice idea, but our IT is a monolithic Godzilla where it's the Microsoft way or no way. The individuals in customer-facing roles are largely ignorant but also arrogant, and there's a top-down policy insisting only they can manage any kind of data interaction. Guaranteed they won't do this, sadly.

However, the issue is not only with the app and the SQLite file; it affects the LiveCode IDE as well. The common factor for all of these is writing: reading is fine, and running is slightly slow (presumably the antivirus kicking in), but nowhere near as slow as writing. Neither of these issues has affected me before, so I don't know how to explain this…

I'll see if the issue persists with v10 DP5, although since they block browser downloads of .exe binaries it's just more of a pain :-/

Thank you tho!
S.

LCMark
Livecode Staff Member
Posts: 1209
Joined: Thu Apr 11, 2013 11:27 am

Re: Slow... file save

Post by LCMark » Wed Apr 19, 2023 10:59 am

@stam: What you are experiencing is quite strange - we have made a number of fixes on Windows over the last couple of years which have vastly improved the performance of saving stackfiles, encoding arrays and such.

Does the slow save performance you are seeing when writing data to disk depend on where you are writing to? e.g. Does your work machine have any local storage, or is it all network drives?

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Wed Apr 19, 2023 12:41 pm

LCMark wrote:
Wed Apr 19, 2023 10:59 am
@stam: What you are experiencing is quite strange - we have made a number of fixes on Windows over the last couple of years which have vastly improved the performance of saving stackfiles, encoding arrays and such.

Does the slow save performance you are seeing when writing data to disk depend on where you are writing to? e.g. Does your work machine have any local storage, or is it all network drives?
Hi Mark - this is on a work laptop. It does have local storage, although I'm on a roaming profile so I get the same desktop/documents no matter which machine I use to log in. This isn't a network share, as it's available offline and syncs my files on connection.

I guess it might be something in my code - I can't post it now as I'm at work but will do so later. Perhaps more importantly, it's not something I've seen with any other app on the same machine.

I’ll post my code later today…
Many thanks
Stam

FourthWorld
VIP Livecode Opensource Backer
Posts: 9842
Joined: Sat Apr 08, 2006 7:05 am
Location: Los Angeles

Re: Slow... file save

Post by FourthWorld » Wed Apr 19, 2023 4:16 pm

Semi-random diagnostic test to rule out some things:

Does the slow performance persist if you turn off Windows Defender?

(Be sure to turn it back on again after the test.)
Richard Gaskin
LiveCode development, training, and consulting services: Fourth World Systems
LiveCode Group on Facebook
LiveCode Group on LinkedIn

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Wed Apr 19, 2023 7:55 pm

FourthWorld wrote:
Wed Apr 19, 2023 4:16 pm
Does the slow performance persist if you turn off Windows Defender?
Sadly, access to any antivirus/antimalware, including MS Defender, is locked out for us mere users... there's no way I can test this without an 'admin' logging on remotely to do it, and it's equally sad that this will never happen. Any 'friendly' admins/business relationship managers have long since been removed in favour of centralising IT services to a pool of ever-changing, semi-knowledgeable IT staff who generally last 3-6 months on the job and are then replaced by the cheapest staff they can find. Apparently the bosses atop their ivory tower don't want 'shadow IT fiefdoms' and insist on centralising this way, which of course is just a recipe for ongoing problems. Hey ho...

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Wed Apr 19, 2023 8:15 pm

Further to this, my code to write to an SQLite database is below. This is excruciatingly slow on my Windows work laptop (Core i5), taking several minutes, but it takes less than a second on my Mac (admittedly an M2 Air with 24 GB - a pretty fast machine).

The command takes a numerically indexed array (e.g. the dgData of a data grid) and commits each record to an SQLite table whose column names exactly match the keys of each sub-array under the numeric index.
The code below relies on a handler that opens the database and stores its connection ID in a script local, and a handler that closes all open database connections (I've not included these, so you'd need your own if testing this code). The connection ID is in the sConnID script-local variable, and you'd need at least a connectDB handler.
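
For anyone wanting to test this, a minimal sketch of what those two handlers might look like is below - the database path and file name here are purely illustrative, not from my actual app:

Code: Select all

local sConnID

command connectDB
    # open the SQLite file and remember the connection id (path is illustrative only)
    local tDBPath
    if sConnID is not empty then exit connectDB
    put specialFolderPath("documents") & "/myData.sqlite" into tDBPath
    put revOpenDatabase("sqlite", tDBPath, , , ) into sConnID
    if sConnID is not an integer then
        answer "Could not open database:" && sConnID
        put empty into sConnID
    end if
end connectDB

command closeDB
    # close the stored connection and clear the id
    if sConnID is not empty then
        revCloseDatabase sConnID
        put empty into sConnID
    end if
end closeDB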

On my Mac this writes to SQLite quite fast (276 records with 29 columns each complete entirely in < 200 milliseconds) - the same data set on Windows takes upwards of 3 minutes, and saving the stack file upwards of 5 minutes, which of course is unrelated to my code. Even just saving an empty stack takes ages, though not as long as my app.

The only way I can think of to speed this up is to drop the parameter array and use the actual values, so that one large SQL statement is constructed and executed only once instead of one per record. Not sure how much that will help, though...

Code: Select all

command writeArrayToSQLite pTableName, pArray 
    # add elements of a numerically keyed array to an sqlite table pTableName, whose subkeys match DB fields
    local tHeaders, tKeys, tParamList, tSQL, x, tParamA, tDebug
    connectDB 
    // build column names and parameter list from array
    put replaceText(the keys of pArray[1], "\R", comma) into tKeys
    repeat with i = 1 to the number of items of tKeys
        put quote & item i of tKeys & quote into item i of tHeaders
        put ":" & i & comma after tParamList // ie :1,:2,:3,...,:n
    end repeat
    delete the last char of tParamList
    put "INSERT into" && pTableName && "(" & tHeaders & ") values (" & tParamList & ")" into tSQL
    # build parameter array to pass values to insert
    repeat for each element tElement in pArray      
        put empty into tParamA 
        put 0 into x
        repeat for each item tItem in tKeys
            add 1 to x
            put tElement[tItem] into tParamA[x]
        end repeat
        # execute insert for tElement of tArray
        revExecuteSQL sConnID, tSQL , "tParamA" 
        if the result is not a number then put the result & return after tDebug
    end repeat
    closeDB
    return tDebug // if the result is empty then no SQL error
end writeArrayToSQLite
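
For reference, a call would look something like this (the data grid and table names are just placeholders):

Code: Select all

# example call - substitute your own data grid and table names
local tData
put the dgData of group "DataGrid 1" into tData
writeArrayToSQLite "myTable", tData
if the result is not empty then answer "SQL errors:" & return & the result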

SparkOut
Posts: 2852
Joined: Sun Sep 23, 2007 4:58 pm

Re: Slow... file save

Post by SparkOut » Wed Apr 19, 2023 8:50 pm

I haven't properly studied the code, but have you tried making the SQL updates in a BEGIN ... COMMIT transaction block?

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Wed Apr 19, 2023 9:08 pm

SparkOut wrote:
Wed Apr 19, 2023 8:50 pm
I haven't properly studied the code, but have you tried making the SQL updates in a BEGIN ... COMMIT transaction block?
To clarify: the code above creates an SQL INSERT command for each record and executes it against the SQLite db using a parameter array rather than literal values, i.e.:

Code: Select all

put "INSERT INTO <tableName> (field1, field2, field3, ..., fieldN) VALUES (:1,:2,:3,...,:n)" into tSQL
revExecuteSQL connectionID, tSQL, "tParamA"
tParamA is an array of values for each record, where tParamA[1] = value for :1, tParamA[2] = value for :2 etc.
This requires a nested loop, which is not the fastest approach, but it's not bad because these are 'repeat for each' loops, which are pretty fast.

Not sure a BEGIN/COMMIT block will help, as a single SQLite statement can insert multiple records, so my next step is to test the time difference with a handler that builds one large SQL statement inserting all records (without a parameter array) in one large write rather than many smaller ones.

I'll report back, but I'm not sure it will actually make a difference on my Windows work machine, as even just saving the stack takes several minutes, implying there is a 'write' problem (possibly due to the various antivirus/antimalware software running on all our work machines without exception). It may end up being a case of putting a spinner or progress bar on so the user doesn't think it's crashed and force-quit the app halfway through the write...
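
For example, a rough sketch of driving a progress bar from the insert loop might look like this (it assumes a scrollbar named "progress" on the card - that control and the handler shape are illustrative only):

Code: Select all

# sketch only: progress feedback around the main insert loop
local tDone
set the startValue of scrollbar "progress" to 0
set the endValue of scrollbar "progress" to the number of lines of the keys of pArray
put 0 into tDone
repeat for each element tElement in pArray
    # ... build tParamA and revExecuteSQL as in writeArrayToSQLite ...
    add 1 to tDone
    set the thumbPosition of scrollbar "progress" to tDone
    if tDone mod 10 = 0 then wait 0 milliseconds with messages -- let the UI redraw
end repeat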
Last edited by stam on Wed Apr 19, 2023 11:53 pm, edited 1 time in total.

LCMark
Livecode Staff Member
Posts: 1209
Joined: Thu Apr 11, 2013 11:27 am

Re: Slow... file save

Post by LCMark » Wed Apr 19, 2023 10:18 pm

@stam: BEGIN and COMMIT around your current individual-insert approach should make a difference. Outside a transaction, each insert is its own transaction, and because of SQLite's ACID compliance each transaction has to ensure that the db on disk is consistent (which usually also involves a sync operation to flush any OS buffering of written file data). Currently, every page that adding a new record touches (SQLite's internal file format is very much like paged memory) is written on each insert; if the inserts are all in one transaction, the page set is only written once, at commit. (Chances are each record is small, so at the moment each page is being written many times as it fills up.)

(Obviously the stackfile saving problem is separate - but one thing at a time!)

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Thu Apr 20, 2023 3:48 am

OK, so I have now benchmarked the different approaches with a smaller data set (60 records, 41 columns), averaging 10 writes each.

My initial method averages 38.9 ms (min 36 ms, max 41 ms)

Wrapping each individual insert statement in BEGIN TRANSACTION/COMMIT didn't appreciably change this: average 38.1 ms (min 36 ms, max 41 ms).

Converting this into a single large statement did make a dent, with an average of 19.3 ms (min 13 ms, max 23 ms), so it may be worth doing, although whether this improves performance in the app on my Windows work laptop remains to be seen... certainly the 20 ms difference on my Mac doesn't seem worth it, but perhaps writing just once speeds up the process.

Still some work to do on the full-text method, as I had to strip the data of commas because they interfered with the loop that builds the value lists for the insert.

To follow Mark's suggestion (putting each individual INSERT inside a BEGIN/COMMIT block), I modified my initial method:

Code: Select all

command writeArrayToSQLite2 pTableName, pArray 
    local t1 -- BENCHMARK
    put the milliseconds into t1 -- BENCHMARK
    # add elements of a numerically keyed array to an sqlite table pTableName, whose subkeys match DB fields
    local tHeaders, tKeys, tParamList, tSQL, x, tParamA, tDebug
    connectDB 
    // build column names and parameter list from array
    put replaceText(the keys of pArray[1], "\R", comma) into tKeys
    repeat with i = 1 to the number of items of tKeys
        put quote & item i of tKeys & quote into item i of tHeaders
        put ":" & i & comma after tParamList // ie :1,:2,:3,...,:n
    end repeat
    delete the last char of tParamList
    put "BEGIN TRANSACTION;" & return into tSQL
    put "INSERT into" && pTableName && "(" & tHeaders & ") values (" & tParamList & ");" after tSQL
    put return & "COMMIT;" after tSQL
    # build parameter array to pass values to insert
    repeat for each element tElement in pArray      
        put empty into tParamA 
        put 0 into x
        repeat for each item tItem in tKeys
            add 1 to x
            put tElement[tItem] into tParamA[x]
        end repeat
        # execute insert for tElement of tArray
        revExecuteSQL sConnID, tSQL , "tParamA" 
        if the result is not a number then put the result & return after tDebug
    end repeat
    closeDB
    put tDebug & return & the milliseconds - t1 -- DEBUG / BENCHMARK
    return tDebug
end writeArrayToSQLite2
To change this to a single large insert statement for multiple records (only tested within a BEGIN/COMMIT block):

Code: Select all

command writeArrayToSQLite3 pTableName, pArray 
    local t1 -- BENCHMARK
    put the milliseconds into t1 -- BENCHMARK
    # add elements of a numerically keyed array to an sqlite table pTableName, whose subkeys match DB fields
    local tHeaders, tKeys, tParamList, tSQL, x, tParamA, tDebug
    connectDB 
    // build column names and parameter list from array
    put replaceText(the keys of pArray[1], "\R", comma) into tKeys
    repeat with i = 1 to the number of items of tKeys
        put quote & item i of tKeys & quote into item i of tHeaders
    end repeat
    
    put "BEGIN TRANSACTION;" & return into tSQL
    put "INSERT INTO" && pTableName && "(" & tHeaders & ") VALUES " & return after tSQL
   
    # data for each row
    repeat for each element tElement in pArray      
        put empty into tParamList
        put 0 into x
        repeat for each item tItem in tKeys
            add 1 to x
            put quote & replaceText(tElement[tItem], ",", space) & quote into item x of tParamList
        end repeat
        put "(" & tParamList & ")," & return after tSQL
    end repeat
    delete char -2 to -1 of tSQL
    put ";" & return & "COMMIT;" after tSQL
    
    revExecuteSQL sConnID, tSQL
    
    if the result is not a number then put the result & return after tDebug
    closeDB
    put tDebug & return & the milliseconds - t1 -- DEBUG / BENCHMARK
    return tDebug
end writeArrayToSQLite3
Will benchmark the built app and feed back if there's any difference...
S.

LCMark
Livecode Staff Member
Posts: 1209
Joined: Thu Apr 11, 2013 11:27 am

Re: Slow... file save

Post by LCMark » Thu Apr 20, 2023 6:04 am

To follow Mark's suggestion (putting each individual INSERT inside a BEGIN/COMMIT block)
Sorry @stam - what I said wasn't very clear (to be fair @SparkOut did say it better) - you should put a BEGIN before you start issuing the INSERTs and then a COMMIT after the last one - i.e.

Code: Select all

      exec BEGIN
      repeat for each record
          exec INSERT record
      end repeat
      exec COMMIT
Putting BEGIN/COMMIT around each individual INSERT won't make any difference (beyond adding a tiny amount of overhead in making those two extra queries per insert).

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Thu Apr 20, 2023 8:34 am

LCMark wrote:
Thu Apr 20, 2023 6:04 am
To follow Mark's suggestion (putting each individual INSERT inside a BEGIN/COMMIT block)
Sorry @stam - what I said wasn't very clear (to be fair @SparkOut did say it better) - you should put a BEGIN before you start issuing the INSERTs and then a COMMIT after the last one - i.e.

Code: Select all

      exec BEGIN
      repeat for each record
          exec INSERT record
      end repeat
      exec COMMIT
Putting BEGIN/COMMIT around each individual INSERT won't make any difference (beyond adding a tiny amount of overhead in making those two extra queries per insert).
Thanks for the clarification, Mark - I didn't realise I could run BEGIN and COMMIT as individual revExecuteSQL commands with the individual INSERT revExecuteSQL commands in between - I had assumed it all had to be inside a single block of SQL (live and learn!).

Nevertheless, what I did was build a block of SQL text in the format below and run it with a single revExecuteSQL call:

Code: Select all

BEGIN TRANSACTION;
INSERT INTO <table> (field1, field2, ..., fieldN) VALUES
(value1-record1, value2-record1,..., valueN-record1),
(value1-record2, value2-record2,..., valueN-record2),
...
(value1-recordN, value2-recordN,..., valueN-recordN);
COMMIT;
Would this be expected to be slower than what you recommend? (As mentioned above, this is about 40% faster, but the actual net gain on my Mac is < 50 ms.) Out of curiosity I'll test your recommendation as well, for my own learning...

Many thanks once again,
Stam

stam
Posts: 2686
Joined: Sun Jun 04, 2006 9:39 pm
Location: London, UK

Re: Slow... file save

Post by stam » Thu Apr 20, 2023 8:57 am

To answer my own question, there is a very slight speed advantage, i.e.:

Code: Select all

revExecuteSQL connectionID, "BEGIN TRANSACTION;"
repeat for each record
   revExecuteSQL connectionID, SQL, "<array>"
end repeat
revExecuteSQL connectionID, "COMMIT;"
is about 5 ms faster on an average of 10 writes: 14.1 ms (min 10 ms, max 17 ms).
(Seems a bit counterintuitive to me, but what do I know ;) )
I'll test both variants on my work laptop and hopefully there will be some improvement!

Many thanks once again,
Stam
