Immediately after uploading the "BlurredVision" stack on April 26 to
http://www.metaworx.net/BlurredVision-COM.rev
I noticed that the char-byte arrangement in the matrix-convolve scripts did not allow the stack to run in *all* LC versions from LC 4.6.1 to 8.x. I corrected this a few hours after uploading, and the correction has been included since then.
To illustrate what I corrected, I use the following snippet from a convolve script written as an array function. Bernd Niggemann will recognize traces of his preference for array solutions:
Code: Select all
put \
numtobyte(max(0,min((chartonum(byte (p-8) of treData2) * tA1 + \
chartonum(byte (p-4) of treData2) * tA2 + \
..............etc.............................
chartonum(byte (p+4) of treData6) * tA24 + \
chartonum(byte (p+8) of treData6) * tA25) \
/ tScale + tw, 255))) into tRed
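To show the pattern outside LiveCode, here is a minimal Python sketch of the same "weighted sum, scale, offset, clamp to 0..255" step that the snippet performs per colour channel. The names (`convolve_channel`, `weights`, `scale`, `offset`) are my own illustrative choices, not identifiers from the stack:

```python
# Sketch of the per-channel step in the LC snippet:
# numtobyte(max(0, min(weighted_sum / tScale + tw, 255)))
# Names and the 3-tap usage example are illustrative only.

def convolve_channel(data, p, offsets, weights, scale, offset):
    """Weighted sum of the bytes around index p, scaled, shifted, clamped.

    data    : bytes/bytearray holding one packed byte stream
    p       : index of the centre byte
    offsets : relative byte positions (like -8, -4, ..., +8 in the LC snippet)
    weights : matrix coefficients (like tA1 .. tA25 in the LC snippet)
    """
    total = sum(data[p + o] * w for o, w in zip(offsets, weights))
    return int(max(0, min(total / scale + offset, 255)))

# Usage: a 3-tap sharpen-like kernel (-1, 3, -1) on a flat grey buffer.
buf = bytes([100] * 16)
print(convolve_channel(buf, 8, [-4, 0, 4], [-1, 3, -1], 1, 0))  # 100
```

On a flat buffer the kernel weights sum to 1, so the value is unchanged; the clamp only matters near edges in contrast-rich regions.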
The script here contains the sequence "*numtobyte-chartonum-byte*". The wonderful effect of this combination is that it works without throwing errors in all LC versions from 4.6.1 to 8.x. A convolve script of this kind runs 3.54 times slower in LC 8 than in LC 4.6.1, irrespective of image size. It is also the fastest of the three char-byte permutations "*numtochar-chartonum-char*", "*numtobyte-chartonum-byte*", and "*numtobyte-bytetonum-byte*".
Measured with a 640 x 480 image in LC 8, "*numtobyte-chartonum-byte*" runs about 2 seconds faster than the combination "*numtobyte-bytetonum-byte*".
"*numtobyte-bytetonum-byte*" produces instant crashes in LC 4.6.1.
"*numtochar-chartonum-char*" throws errors in LC 6 (and before) and crashes the earlier versions of LC 7.x. It is accepted - meaning it neither throws errors nor crashes - in later LC 7 and all LC 8 versions, but it is tremendously slower there (as is to be expected because of the general char-byte change in LC 7). For that reason the speed difference can only be measured with small image sizes.
Slowdown of "*numtochar-chartonum-char*" compared with "*numtobyte-chartonum-byte*", measured in LC 8:
- image size 160 x 120 = 13.49 times slower
- image size 320 x 240 = 76.28 times slower
- image size 480 x 360 = 547.03 times slower
At size 480 x 360 the filter takes about 45 minutes with numtochar-chartonum-char compared to 5 seconds with numtobyte-chartonum-byte (on my Windows 7 computer).
The "same" image at different sizes was used for the measurements, i.e. the text of a 2048 x 1536 image was imported into the test stack, stored in a custom property, and is then ready for display in 8 different sizes for testing. This ensures that the average sum(RGB) per pixel remains the same across all sizes. (Brighter pictures take a few milliseconds longer than darker ones.)
No progress bars were used in the tests; as I have measured under multiple conditions, they would in any case produce only insignificant speed differences.
The filter used for the tests was an "unsharp mask 5x5" filter. The same kind of script and the same matrix values were used for all measurements, the only exception being the change from the "numtobyte-chartonum-byte" to the "numtochar-chartonum-char" sequence.
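For readers unfamiliar with the filter type, a generic 5x5 unsharp-mask convolution can be sketched as follows. The actual matrix values used in the measurements are not given in this post, so the kernel below (all -1 around a centre of 25, scale 1) is a common illustrative choice, not the BlurredVision matrix:

```python
# Generic 5x5 unsharp-mask convolution on a 2D list of grey values (0..255).
# Illustrative kernel only; the stack's real matrix values are not shown here.

def unsharp_5x5(img):
    h, w = len(img), len(img[0])
    kernel = [[-1] * 5 for _ in range(5)]
    kernel[2][2] = 25                     # centre weight; kernel sums to 1
    out = [row[:] for row in img]         # borders are left unchanged
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            total = sum(kernel[j][i] * img[y + j - 2][x + i - 2]
                        for j in range(5) for i in range(5))
            out[y][x] = max(0, min(total, 255))  # clamp like the LC script
    return out

flat = [[128] * 7 for _ in range(7)]
print(unsharp_5x5(flat)[3][3])  # 128: kernel sums to 1, flat areas unchanged
```

Because the kernel weights sum to 1, flat regions pass through unchanged while edges are amplified, which is the defining behaviour of an unsharp mask.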
Script structure is another factor that can produce speed differences. An extreme example is the LiveCode-lessons blur script, which is 28 times slower than the fastest of my blur filters contained in the BlurredVision stack; the very same filter effect can thus be achieved with totally different script structures and speeds. When comparing both of these blur scripts in LC 4.6.1 and LC 8, however, the basic speed difference of about 3.5 times between the versions remains - because of the different image-processing algorithms in the engines of LC 4.6.1 and LC 8.
Next time I intend to return to the issue of "spatial" filters, where - unlike with matrix-convolve filters - the RGB values of the individual pixels remain unchanged and the pixels are only "relocated" inside the image. An example is the "mirror from right" script we already discussed in this thread on April 19.
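The idea behind such a spatial filter can be sketched in a few lines of Python. This is my own illustration of a "mirror from right" operation, not the script from the thread:

```python
# Sketch of a "mirror from right" spatial filter: pixel values are unchanged,
# pixels are only relocated. My own illustration, not the thread's LC script.

def mirror_from_right(pixels):
    """Copy the right half of each row, mirrored, onto the left half.

    pixels: 2D list of pixel values (e.g. packed RGB ints); returns a new image.
    """
    out = []
    for row in pixels:
        w = len(row)
        new_row = row[:]
        for x in range(w // 2):
            new_row[x] = row[w - 1 - x]   # take from the mirrored position
        out.append(new_row)
    return out

img = [[1, 2, 3, 4]]
print(mirror_from_right(img))  # [[4, 3, 3, 4]]
```

Note that no arithmetic is done on the pixel values at all, which is why such filters behave very differently from convolve filters in the speed comparisons above.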
Kind regards,
Wilhelm Sanke