Lesson 6 - Web Services: pre-lesson research

EOTR
Posts: 49
Joined: Fri Aug 09, 2013 7:20 pm

Lesson 6 - Web Services: pre-lesson research

Post by EOTR » Thu Sep 05, 2013 4:25 pm

Next week's lesson on 9/10 is to cover "This lesson will demonstrate how to use XML, JSON and RSS data and display it in LiveCode."

Here are a few links I found about these technologies:
http://www.xmlnews.org/docs/xml-basics.html
http://www.copterlabs.com/blog/json-wha ... to-use-it/
http://rss.softwaregarden.com/aboutrss.html

I've never actually used JSON. Does anybody have any other references they like?

thanks, Henry

FourthWorld
VIP Livecode Opensource Backer
Posts: 9824
Joined: Sat Apr 08, 2006 7:05 am
Location: Los Angeles
Contact:

Re: Lesson 6 - Web Services: pre-lesson research

Post by FourthWorld » Thu Sep 05, 2013 4:52 pm

JSON is a wonderfully flexible format, well suited for many data representations and an unusually good fit for hierarchically-structured data.

While less common, the binary variant of JSON, BSON, is used by MongoDB and others to provide a format that's even more machine-friendly for parsing: http://bsonspec.org/

Like XML, the main benefit of JSON is in exchanging data with systems which have little knowledge of one another. By adopting a common format that's reasonably easy to parse, services can exchange data with clients with each having very little knowledge about how the other will handle the data internally. In fact, one could argue that most use cases favoring JSON might benefit from XML almost equally, though JSON tends to be slightly more compact and therefore sometimes a better choice for network transfer.
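
To make the comparison concrete, here's the same small record in both formats (the data is purely illustrative):

Code: Select all

{ "person": { "name": "Henry", "posts": 49 } }
and the XML equivalent:

Code: Select all

<person><name>Henry</name><posts>49</posts></person>
Same structure either way; JSON just carries a bit less markup overhead.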

All that said, if the service you're trading data with is also made with LiveCode, such as using LiveCode Server or a Linux standalone as your CGI, you'll have one more option which may be even more efficient: encoded arrays.

Like JSON data, LiveCode's associative arrays are effectively just name-value pairs, and are well suited to representing hierarchically ordered data.
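
As a quick sketch (variable and key names are made up), building such a hierarchy in script is just a matter of nested keys:

Code: Select all

put "Henry" into tPerson["name"]
put "XML basics" into tPerson["links"][1]["title"]
put "90210" into tPerson["address"]["zip"]
Each bracketed key can itself hold another array, giving the same tree shape JSON describes.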

Like most associative array implementations, LiveCode's arrays are efficient because their lookup scheme depends on specific in-memory locations. While this makes them very fast to work with, it also means they can't, by themselves, be saved to disk or transferred as a stream over a network.

So the team at RunRev added a pair of functions to translate arrays to and from their memory-specific form into a form that can be treated like any ordinary binary stream: arrayEncode and arrayDecode.

Taking this further, you can run the results of arrayEncode through LiveCode's built-in gzip compressor for more efficient network transfer:

Code: Select all

put compress(arrayEncode(tArrayVar)) into tSomeVar
If the client software is also made in LiveCode, turning that back into an array is simply:

Code: Select all

try
  put arrayDecode(decompress(tReceivedData)) into tArrayVar
catch tErr
  throw "Data in invalid format"
end try
The try/catch error handling is helpful in case something went wrong during transfer, leaving LiveCode unable to decompress the data.

Encoded arrays use a format unique to LiveCode, so they're only a useful option when you have LiveCode as both the client and server.

But if you do, you may find them unusually efficient and easy to work with, providing many of the benefits of BSON; and because they can take advantage of the built-in compress function, they can be even more efficient.
Richard Gaskin
LiveCode development, training, and consulting services: Fourth World Systems
LiveCode Group on Facebook
LiveCode Group on LinkedIn

Re: Lesson 6 - Web Services: pre-lesson research

Post by EOTR » Thu Sep 05, 2013 6:19 pm

Thanks for the tip Richard!

So LC has a way of handling associative arrays that is an alternative to using XML/JSON, and you would use arrayEncode/arrayDecode to access the arrays on a database(?) if the server and client were both made out of LC. Correct? How do you organize the arrays, with something like SQLite?

Re: Lesson 6 - Web Services: pre-lesson research

Post by FourthWorld » Thu Sep 05, 2013 6:53 pm

The ways to store data are nearly infinite, limited only by the imagination and the needs of the project at hand.

I have one storage subsystem I'm using on a few projects which provides schema-free MongoDB-like ways of working, but unlike MongoDB it runs well within the limits imposed on CGIs, without requiring a dedicated database server. In this system (which I affectionately call DChunk for reasons not worth getting into) each "record" is an encoded array stored as a separate file, with one main index which is simply tab-delimited text. It's so simple it's almost cheating, but for small record sets (<5000 or so) it's very efficient, and could be scaled up to about 10k before there's any noticeable performance degradation. It's even functional at 100k records, but if you have that much data you're probably better off with a dedicated DB engine.
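
This isn't the actual DChunk code, but a rough sketch of the idea (handler and path names are invented for illustration):

Code: Select all

-- store one record as a compressed, encoded array in its own file
on saveRecord pID, pRecordArray
   put compress(arrayEncode(pRecordArray)) into URL ("binfile:records/" & pID & ".rec")
   -- append one line to the tab-delimited master index
   put pID & tab & pRecordArray["title"] & return after URL "file:records/index.txt"
end saveRecord

-- fetch it back as a LiveCode array
function loadRecord pID
   return arrayDecode(decompress(URL ("binfile:records/" & pID & ".rec")))
end loadRecord
A real version would also need to guard against concurrent writes to the index, but the core of the scheme really is that small.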

But with any system, once an array has been run through arrayEncode it's just binary data. You could store it in a BLOB field in any SQL DB, or even in MongoDB or CouchDB if you like. Of course, doing so will obviate the benefits those systems provide for aggregate operations across the collection, like searches, though you could augment the store with keyed fields that hold indexable values from the array on CREATE and UPDATE.
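
As a sketch of that approach (connection, table, and variable names are assumed), revExecuteSQL's "*b" prefix binds a variable as binary, so the encoded array survives the trip into a BLOB column intact:

Code: Select all

put compress(arrayEncode(tArrayVar)) into tBlobData
-- :1 and :2 are placeholders; "*b" marks tBlobData as binary data
revExecuteSQL tConnID, "INSERT INTO records (id, data) VALUES (:1, :2)", "tRecordID", "*btBlobData"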

But at that point it may be more efficient to just use SQL as it was designed, with tables and fields and the rest. SQL DBs have many rich features for representing and storing data, and if your collection may grow to any significant size (>5k records) you'll almost always benefit from having a solid DB engine do the heavy lifting for you.

You can still use arrays client-side if you're obtaining a record and your CGI walks through the record to put the data into named array slots. Then again, if you're using revDataFromQuery and get the data back in a tab-delimited format, that's really easy to parse as it is.
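
For example (the query and column names are illustrative), turning that tab-delimited result into an array keyed by id takes only a few lines:

Code: Select all

put revDataFromQuery(tab, return, tConnID, "SELECT id, name FROM people") into tData
set the itemDelimiter to tab
repeat for each line tLine in tData
   put item 2 of tLine into tNames[item 1 of tLine]
end repeat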

SQLite can sometimes be a good choice even for server-side data stores, if the number of concurrent writers is low. A content management system is a good example: the data is read with each page request, and SQLite can handle almost any number of reads since they don't lock the store, while writes in a CMS come only from team members adding content, so they're much rarer, making a simple lock file a reasonable way to queue them.
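
A minimal lock-file scheme might look like this (the file name is arbitrary, and real code would want a timeout so a crashed writer can't block everyone forever):

Code: Select all

-- wait until no other writer holds the lock
repeat while there is a file "cms.lock"
   wait 50 milliseconds with messages
end repeat
put empty into URL "file:cms.lock"  -- acquire the lock
-- ...perform the SQLite write here...
delete file "cms.lock"              -- release the lock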

But for more transactional systems, MySQL is a better choice because it's far better at record-level locking, allowing many more concurrent writes. A sales system that has to decrement inventory with each purchase is a good example.

And if you have access to a dedicated server, there's much to be said for the scalability and flexibility of so-called "NoSQL" DBs like CouchDB and MongoDB.

So the bottom line is that there's no bottom line. :) The range of ways we can store data is vast, and choosing the right one depends on the specifics of a given project.
