Hi,
liveme wrote: ↑Sun Feb 21, 2021 12:13 am
so one does have to run a search operation to look up this whole table to find out what records will need to be updated or deleted,
I'd do a query on my "t_tablestats" first. This shows me what tables have changed at all.
Thus I don't need to bother about tables that haven't changed. So:
Code: Select all
put "SELECT TName FROM `t_tablestats` WHERE LUPdate > (" & gPrefArr["lastDBUpdate"] & ");" into StrSQL
put doSQL("cache",StrSQL) into myTableList -- doSQL is explained below
This gives me a list of all tables that need updating.
(gPrefArr["lastDBUpdate"] holds "the seconds" - the Unix timestamp - of the last update.)
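How `t_tablestats` stays current isn't shown here; a common way is a trigger per table. A minimal sketch (Python/sqlite3 as a stand-in for the real DB; table and column names are taken from the post, the trigger itself is my assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t_tablestats (TName TEXT PRIMARY KEY, LUPdate INTEGER);
    CREATE TABLE t_customers (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO t_tablestats VALUES ('t_customers', 0);
    -- Assumption: LUPdate is bumped by a trigger on each write.
    CREATE TRIGGER trg_customers_touch AFTER INSERT ON t_customers
    BEGIN
        UPDATE t_tablestats SET LUPdate = strftime('%s','now')
        WHERE TName = 't_customers';
    END;
""")

last_db_update = 100  # stand-in for gPrefArr["lastDBUpdate"] ("the seconds")
con.execute("INSERT INTO t_customers (name) VALUES ('ACME')")

# The "what changed since the last sync?" query from above:
changed = [row[0] for row in con.execute(
    "SELECT TName FROM t_tablestats WHERE LUPdate > ?", (last_db_update,))]
print(changed)  # -> ['t_customers']
```

This keeps the bookkeeping on the database side, so the client only ever runs one cheap query to decide what needs syncing.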
Now I loop through this list:
Code: Select all
repeat for each line L in myTableList
put L into x_Table
put gPrefArr[L]["fields2update_lite"] into x_Flds -- field list for x_Table
/* id,val1,val2 */
put gPrefArr["SQL_Select4update"] into strSQL -- generic sql to get the data
/* SELECT [[x_Flds]] FROM `[[x_Table]]` WHERE isModified = 1; */
put merge(StrSQL) into StrSQL
/* SELECT id,val1,val2 FROM `myTable` WHERE isModified = 1; */
put doSQL("cache",StrSQL) into myRecords
/* Generic SQL routine that queries in the cache DB & returns a result */
-- [error handling ...]
/* Now all data from our table (that need updating) are in myRecords */
put gPrefArr[L]["tempTableDef"] into myTabDef -- tableDef for x_Table
/* `id` INT(8) NOT NULL,`val1` CHAR(64) NULL DEFAULT NULL,`val2` INT SIGNED NULL DEFAULT '0' */
put makeTempTable("myDB", myRecords, myTabDef) into myTemp
/* This creates a temp table, fills it with myRecords, and returns "[connID],[tmpTblName]" */
-- [error handling ...]
/* We have now a temp table with the data. So we call a stored procedure to process it: */
put gPrefArr[L]["updateProc"] into myProc -- storedProc for x_Table
put "Call `" & myProc & "` ('" & item 2 of myTemp & "');" into strSQL
/* Example for the SQL in the proc (returns 1 on success, else 0):
. INSERT INTO `t_table` (`id`,`fld1`,`fld2`) (SELECT `id`,`val1`,`val2` FROM `t_temp`)
. ON DUPLICATE KEY UPDATE `fld1` = VALUES(`fld1`), `fld2` = VALUES(`fld2`); */
put doSQL_M("myDB",StrSQL,item 1 of myTemp,true) into myResult
/* Same SQL routine as above, only it uses a valid connection in Param3, and, if Param4
is true, closes it */
-- [error handling ...]
/* Assuming all went well we can now "unflag" our cached records: */
put gPrefArr["SQL_RemoveModFlags"] into strSQL -- generic sql to reset "modified" flags
/* UPDATE `[[x_Table]]` SET isModified = 0 WHERE isModified = 1; */
put merge(StrSQL) into StrSQL
/* UPDATE `myTable` SET isModified = 0 WHERE isModified = 1; */
put doSQL("cache",StrSQL) into myResult
-- [error handling ...]
end repeat
(Pasted into a button's "mouseUp" handler this looks a lot better ...)
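Stripped of the gPrefArr indirection, each loop pass is the same four-step cycle. A minimal Python/sqlite3 sketch of one pass (the stored procedure is inlined as a plain upsert, sqlite's ON CONFLICT stands in for MySQL's ON DUPLICATE KEY UPDATE; table and column names are the post's examples):

```python
import sqlite3

cache = sqlite3.connect(":memory:")   # local cache DB
remote = sqlite3.connect(":memory:")  # stands in for the remote "big iron"

cache.executescript("""
    CREATE TABLE myTable (id INTEGER PRIMARY KEY, val1 TEXT, val2 INTEGER,
                          isModified INTEGER DEFAULT 0);
    INSERT INTO myTable VALUES (1, 'old', 10, 0), (2, 'new', 20, 1);
""")
remote.executescript("""
    CREATE TABLE t_table (id INTEGER PRIMARY KEY, fld1 TEXT, fld2 INTEGER);
    INSERT INTO t_table VALUES (2, 'stale', 0);
""")

# 1) Pull only the flagged rows from the cache (cf. SQL_Select4update).
records = cache.execute(
    "SELECT id, val1, val2 FROM myTable WHERE isModified = 1").fetchall()

# 2) Ship them into a temp table on the remote (cf. makeTempTable).
remote.execute("CREATE TEMP TABLE t_temp (id INTEGER, val1 TEXT, val2 INTEGER)")
remote.executemany("INSERT INTO t_temp VALUES (?,?,?)", records)

# 3) Upsert temp -> live table (the stored procedure's job; sqlite syntax here).
remote.execute("""
    INSERT INTO t_table (id, fld1, fld2)
    SELECT id, val1, val2 FROM t_temp WHERE true
    ON CONFLICT(id) DO UPDATE SET fld1 = excluded.fld1, fld2 = excluded.fld2
""")
remote.commit()

# 4) All went well -> unflag the cached records (cf. SQL_RemoveModFlags).
cache.execute("UPDATE myTable SET isModified = 0 WHERE isModified = 1")
cache.commit()

print(remote.execute("SELECT fld1 FROM t_table WHERE id = 2").fetchone()[0])
# prints "new" - the remote row was updated, the unflagged row 1 never moved
```

The point of the pattern is step 2: one bulk transfer plus one server-side statement, instead of one round trip per record.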
This is more or less copied from the original program. You may notice a few strange things:
- I'm using a "gPrefArr" (= global preferences array) a lot. This comes in handy because I load it (from my FTP server) at each startup - so I can change a lot of stuff w/o even firing up an IDE.
.
- All my database functions are encapsulated & in a library stack.
- "DoSQL(myDB, StrSQL)" opens a connection to myDB (fetching the credentials from gPrefArr ...), determines what to use for SQL (revdb_execute or revdb_queryList), sends the query to myDB, checks the result, closes the connection and returns the data (or an error description).
- "DoSQL_M(myDB, StrSQL, aConnID, mustCloseConn)" is quite similar - only it doesn't open/close the connection automagically, it checks in aConnID for a connection ID to recycle, and closes only if mustCloseConn is true.
- "makeTempTable(myDB, myRecords, myTableDef)" is more involved: It connects to myDB, creates a temporary table using myTableDef (dropping it first if it already exists), inserts the data from myRecords & returns the connection ID & comma & the tempTableName. It doesn't close the connection, because that would destroy the temp table. That is done by a subsequent DoSQL_M().
.
- The real job (updating/ inserting the data on the remote DB) is done here by a stored procedure on the "big iron". This way the whole thing is quite fast - I transmit only a minimum of data and as little code as possible.
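For what it's worth, the DoSQL idea translates to other environments almost 1:1. A rough Python/sqlite3 approximation (function name and error format are mine, not from the post; the real DoSQL dispatches between revdb_execute and revdb_queryList and fetches credentials from gPrefArr):

```python
import sqlite3, os, tempfile

def do_sql(db_path, sql, params=()):
    """Rough analogue of the post's DoSQL: open a connection, run the query,
    close the connection, return data (SELECT) or a row count - or an
    error description string on failure."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(sql, params)
        if sql.lstrip().upper().startswith("SELECT"):
            return cur.fetchall()          # cf. revdb_queryList
        con.commit()
        return cur.rowcount                # cf. revdb_execute
    except sqlite3.Error as err:
        return "dberr: " + str(err)        # hypothetical error format
    finally:
        con.close()

path = os.path.join(tempfile.mkdtemp(), "cache.db")
do_sql(path, "CREATE TABLE t_test (id INTEGER PRIMARY KEY, v TEXT)")
do_sql(path, "INSERT INTO t_test VALUES (1, 'a')")
print(do_sql(path, "SELECT v FROM t_test"))  # -> [('a',)]
```

Open/close per call costs a little, but it means callers can never leak a connection - which is exactly why the temp-table case needs the DoSQL_M variant with an explicit connection ID.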
Further, I keep the cached data as small as possible. For instance, a salesman has only his own 200 customer addresses stored instead of all 27,475 records from `addresses`. And when updating, only the absolutely necessary fields are touched.
As much as possible is left to the database engine (using stored views, functions, procedures, triggers). This way I can utilize the caching mechanisms & the "horsepower" of the remote DB.
liveme wrote: ↑Sun Feb 21, 2021 12:13 am
i guess if you do not have that many record to updat... this would not take very long anyway...plus it would also depends on how many time in a day one expect to sync all its users..
I have "worst case" times for a "full sync/ renew" of about 10 - 15 seconds (a salesman with very many small customers, very many saved documents, nasty habits and an exceedingly sluggish ancient tablet). Usually it's done in ~5 - 7 seconds. Used by humanoids with IQ > shoe size (!= sales droids) it's a ~5 second renew at startup & occasional 3 - 5 second syncs/ updates during work.
After trying many ways to do it effectively I'm now at this solution:
My main module (the standAlone, acting as "menu") shows whether there's a need to sync, and nags occasionally if there is. The user is responsible for initiating the sync himself (preferably when there's time & a good connection).
If the user persistently ignores this (for quite a long time), the program will cease working until a "full sync/ renew" is done.
liveme wrote: ↑Sun Feb 21, 2021 12:13 am
Q : What does "flag" means, and how do you do it, do you use of bolean column for a true/false status mark ?
Yup. 1 = true, 0 = false.
Hope this helps, have fun!