Post
by LCMark » Fri Nov 01, 2013 11:12 am
Well, whilst nothing is being drawn under lock screen, the field does recompute its layout after each change, so work is still being done. (We are looking into making the field defer its layout operations until needed, which would make the code in question substantially faster.)
Recomputing the layout involves iterating over each paragraph, and then over each style-run in the paragraph; and an important thing to bear in mind is that the field stores text as mixed runs of unicode and non-unicode text - thus 'babel' will be a single style run, and 'bąbel' will be three ('b', 'ą', 'bel'). (This is something we've changed in the unicode/refactored branch of the engine we've been working on - paragraphs will either be unicode or not, thus eliminating this source of extra work).
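To make the run-splitting concrete, here is a minimal sketch of the idea (the function name and the ASCII-vs-non-ASCII test are illustrative assumptions, not the engine's actual API or criterion): each maximal stretch of same-kind characters becomes its own run.

```python
# Hypothetical sketch: split text into runs wherever the text switches
# between "non-unicode" (here approximated as ASCII) and unicode
# characters. Not the real engine code - just the shape of the idea.
from itertools import groupby

def split_style_runs(text):
    """Group consecutive characters by whether they are ASCII."""
    return ["".join(chars)
            for _, chars in groupby(text, key=lambda c: ord(c) < 128)]

print(split_style_runs("babel"))  # ['babel'] - one run
print(split_style_runs("bąbel"))  # ['b', 'ą', 'bel'] - three runs
```

So the unicode word carries three times as many runs through every layout pass, which is where the extra iteration work comes from.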
In the current engine (<6.5), on each layout pass the engine measures the length of each style-run using a very fast (but not typographically accurate) mechanism for non-unicode text, and a slow (but typographically accurate) mechanism for unicode text. This means that, in the timings for these engine versions, most of the work is being spent in the slow text APIs for the unicode text.
In 6.5 we've substantially reduced the time it takes to measure strings: text is broken up at appropriate points and these smaller strings are measured once, with the results cached. This means that the time taken (when repeating the same operation) is essentially the time taken to iterate over the field's data structure to recompute its layout (as after the first pass the strings have been measured, and won't be measured again through the slow APIs).
Thus, in the unicode case, the engine is doing three times as much work as in the non-unicode case. So the timings look like they are demonstrating a constant overhead of around 80ms for doing everything apart from iterating over the style runs 10000 times, and 20ms for processing a single style run 10000 times.
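Reading those figures as a simple linear model (my reading of them, not measured data beyond what's quoted above), the arithmetic works out as:

```python
# Back-of-envelope model of the quoted 6.5 timings, per 10000
# iterations: ~80ms fixed overhead plus ~20ms per style run.
fixed_overhead_ms = 80
per_run_ms = 20

def predicted_time_ms(style_runs):
    return fixed_overhead_ms + per_run_ms * style_runs

print(predicted_time_ms(1))  # 100 - 'babel', one run
print(predicted_time_ms(3))  # 140 - 'bąbel', three runs
```

That is, the remaining unicode/non-unicode gap in 6.5 is purely the two extra runs, not the measurement APIs.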
So to sum up: the reason the unicode text is slower in the pre-6.5 engine is a combination of the data structure being 3 times the size (3 runs rather than 1) and the APIs used to measure the unicode bit being substantially slower than those for the non-unicode bit. The reason it is still slower in the 6.5 engine is just the difference between processing 3 runs rather than 1, 10000 times over (since the cost of measurement has, over 10000 repetitions, been essentially eliminated).
[Incidentally, the reason the text processing APIs are slower is that they are doing a great deal of work: the characters of the string need to be mapped to fonts that can deal with them, then the characters need to be mapped to glyphs, then the font might have various tables that operate on those glyphs, doing things like forming ligatures and performing kerning, and finally the glyphs need to be positioned so that the width of the string can be computed.]
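A toy model of that pipeline might look like the following - the glyph, ligature, and kerning tables are entirely invented for illustration, and real shaping engines are vastly more involved:

```python
# Toy shaping pipeline: characters -> glyphs, ligature substitution,
# then glyph positioning (advance widths plus kerning). All tables
# below are made up purely to illustrate the stages described above.
GLYPHS = {"f": 10, "i": 4, "b": 9, "a": 8}   # glyph -> advance width
LIGATURES = {("f", "i"): ("fi", 12)}          # 'f'+'i' -> 'fi' ligature
KERNING = {("f", "a"): -1}                    # pair width adjustment

def shape_and_measure(text):
    # 1) map characters to glyphs (here a glyph is just its name)
    glyphs = list(text)
    # 2) apply ligature substitutions from the font's tables
    shaped, i = [], 0
    while i < len(glyphs):
        pair = tuple(glyphs[i:i + 2])
        if pair in LIGATURES:
            shaped.append(LIGATURES[pair])
            i += 2
        else:
            shaped.append((glyphs[i], GLYPHS[glyphs[i]]))
            i += 1
    # 3) position glyphs: sum advances, adjusting adjacent pairs
    width = sum(w for _, w in shaped)
    for (a, _), (b, _) in zip(shaped, shaped[1:]):
        width += KERNING.get((a, b), 0)
    return width

print(shape_and_measure("fib"))  # 21: 'fi' ligature (12) + 'b' (9)
```

Even in this toy form you can see why per-string measurement through such a pipeline is expensive enough to be worth caching.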