This commit adds a per-lookup caching infrastructure to GSUB/GPOS, and
uses it to cache the input ClassDef.get_class value for (Chain)ContextLookupFormat2.
For fonts that make heavy use of class-based (format 2) context matching,
this shows a good speedup. For NotoNastaliqUrdu, for example, I observe a 17% speedup.
Unfortunately not many other lookups can use a cache like this :(.
https://github.com/harfbuzz/harfbuzz/pull/3636
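A minimal sketch of the idea, with hypothetical names and none of HarfBuzz's actual per-lookup cache plumbing: memoize class lookups for the duration of one lookup's application, so class-based context matching doesn't recompute the class of the same glyph over and over.

    // Hypothetical sketch: CachedClassDef and get_class_uncached are
    // illustrative names, not HarfBuzz API.
    #include <cstdint>
    #include <functional>
    #include <unordered_map>

    struct CachedClassDef
    {
      std::function<unsigned (uint32_t)> get_class_uncached; // wraps the real ClassDef lookup
      mutable std::unordered_map<uint32_t, unsigned> cache;  // glyph -> class, per lookup

      unsigned get_class (uint32_t glyph) const
      {
        auto it = cache.find (glyph);
        if (it != cache.end ()) return it->second;
        unsigned klass = get_class_uncached (glyph);
        cache.emplace (glyph, klass);
        return klass;
      }
    };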
Eliminates the use of a glyph -> klass hash map and replaces it with a vector storing the mapping. This allows us to use the vector directly as the iterator driving serialization. Approximately a 1% speedup for Noto Nastaliq.
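A minimal sketch of the vector-based mapping, with hypothetical names standing in for the subsetter's internals: collect (glyph, class) pairs into a flat vector once, then iterate that vector directly while serializing instead of probing a hash map per glyph.

    #include <cstdint>
    #include <utility>
    #include <vector>

    using GlyphClass = std::pair<uint32_t, unsigned>; // (glyph id, class)

    std::vector<GlyphClass>
    collect_glyph_classes (const std::vector<uint32_t> &glyphs,
                           unsigned (*get_class) (uint32_t))
    {
      std::vector<GlyphClass> glyph_and_class;
      glyph_and_class.reserve (glyphs.size ());
      for (uint32_t g : glyphs)
        glyph_and_class.emplace_back (g, get_class (g));
      return glyph_and_class; // iterate this directly when serializing the ClassDef
    }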
We should not cache visited LangSys entries, because two different
Record<Langsys> entries could have different Tags while pointing to the
same LangSys; a LangSys being redundant in Record<Langsys> A does not
mean it's redundant in Record B. The same applies to visited_script.
Also add the number of features in the LangSys's feature list to the
visited LangSys count, so the count is more accurate.
Plus some improvements to the LangSys compare() method.
Fixes https://oss-fuzz.com/testcase-detail/5549945449480192
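A minimal illustration of the aliasing problem, using hypothetical stand-in types: if the visited cache is keyed by the LangSys alone, a later record that shares the LangSys but carries a different Tag gets skipped even though its redundancy decision may differ.

    #include <cstdint>
    #include <unordered_set>

    struct LangSys { /* feature indices, elided */ };
    struct LangSysRecord { uint32_t tag; const LangSys *langsys; };

    void prune_sketch (const LangSysRecord *records, unsigned count)
    {
      std::unordered_set<const LangSys *> visited; // BAD: keyed by the LangSys alone
      for (unsigned i = 0; i < count; i++)
      {
        if (!visited.insert (records[i].langsys).second)
          continue; // a later record with a different tag but the same LangSys
                    // is skipped here, even though its redundancy may differ
        // ... per-record redundancy decision, which depends on records[i].tag ...
      }
    }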
In prune_langsys: move the LangSys visited check up, before any work is done for a LangSys. In this particular case the compare() method is responsible for the majority of the time spent and wasn't being guarded with a visited check.
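A minimal sketch of the reordering, again with hypothetical stand-in types: hoist the cheap visited check above the expensive comparison so repeated entries bail out before any real work.

    #include <unordered_set>
    #include <vector>

    struct LangSys { std::vector<unsigned> features; };

    // Stand-in for the costly compare() step.
    static bool expensive_compare (const LangSys &a, const LangSys &b)
    { return a.features == b.features; }

    void visit_sketch (const std::vector<const LangSys *> &list, const LangSys &ref)
    {
      std::unordered_set<const LangSys *> visited;
      for (const LangSys *l : list)
      {
        if (!visited.insert (l).second) continue; // cheap guard first
        if (expensive_compare (*l, ref)) { /* prune / record result */ }
      }
    }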
- When pos_glyphs is empty, use the current full glyph set as input for the
subsequent recursive closure process (sketched below).
- Also increase max_lookup_visit_count to 35000, because a real font file hit
the previous limit of 20000 and some lookups were dropped unexpectedly.
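A minimal sketch of the fallback and the raised limit, with hypothetical names for the closure internals:

    #include <cstdint>
    #include <unordered_set>

    using GlyphSet = std::unordered_set<uint32_t>;

    constexpr unsigned MAX_LOOKUP_VISIT_COUNT = 35000; // raised from 20000

    void closure_lookup_sketch (const GlyphSet &current_glyphs,
                                const GlyphSet &pos_glyphs,
                                unsigned &visit_count)
    {
      if (visit_count++ > MAX_LOOKUP_VISIT_COUNT) return; // complexity guard

      // If no positional glyphs were collected, fall back to the full set.
      const GlyphSet &input = pos_glyphs.empty () ? current_glyphs : pos_glyphs;

      // ... recurse into the lookups referenced by this one, using `input` ...
      (void) input;
    }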
ArrayOf.serialize_append allocates space for the new item, but ArrayOf.pop() does not recover the allocated space. So, in the case where the revert path was entered, the extra space added by serialize_append was left in the serialization buffer. This moves the snapshot to before ArrayOf.serialize_append is called, so that revert cleans up the buffer extension.
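A minimal sketch of the ordering fix, with a toy serializer standing in for the real one: take the snapshot before the append so that reverting also releases the space the append allocated.

    #include <cstddef>
    #include <vector>

    struct Serializer // toy stand-in, not the HarfBuzz serializer
    {
      std::vector<unsigned char> buf;
      size_t snapshot () const { return buf.size (); }
      void   revert (size_t snap) { buf.resize (snap); }
      void   append (size_t n) { buf.resize (buf.size () + n); } // like serialize_append
    };

    bool serialize_record_sketch (Serializer &s, bool subset_ok)
    {
      size_t snap = s.snapshot ();  // snapshot taken *before* the append
      s.append (6);                 // space for the new record
      if (!subset_ok)
      {
        s.revert (snap);            // the appended space is rolled back too
        return false;
      }
      return true;
    }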
The test in question is the one added in c68a00b92e.
The culprit is that it allocates lots of memory because of region_indices that
are out of range anyway. So, try to filter those out first.
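A minimal sketch of the filtering step, with hypothetical names: drop out-of-range region indices before sizing any per-region allocations, so malformed input can't force a huge allocation.

    #include <cstdint>
    #include <vector>

    std::vector<uint16_t>
    filter_region_indices (const std::vector<uint16_t> &region_indices,
                           unsigned region_count)
    {
      std::vector<uint16_t> kept;
      kept.reserve (region_indices.size ());
      for (uint16_t idx : region_indices)
        if (idx < region_count) // out-of-range indices are discarded up front
          kept.push_back (idx);
      return kept;
    }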