This commit adds a per-lookup caching infrastructure to GSUB/GPOS, and
uses it to cache the input ClassDef.get_class values for
(Chain)ContextLookupFormat2.
For fonts that make heavy use of class-based (format 2) context
matching, this shows a good speedup. For NotoNastaliqUrdu, for example,
I observe a 17% speedup.
Unfortunately not many other lookups can use a cache like this :(.
https://github.com/harfbuzz/harfbuzz/pull/3636
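Below is a minimal, self-contained sketch of the idea, not HarfBuzz's
actual caching API: the ClassDef and ClassCache types are hypothetical
stand-ins, showing how per-glyph class lookups can be memoized so that
repeated class-based context matching does not recompute them.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    // Hypothetical stand-in for a ClassDef table.
    struct ClassDef
    {
      unsigned get_class (uint32_t glyph_id) const
      {
        auto it = classes.find (glyph_id);
        return it == classes.end () ? 0 : it->second;
      }
      std::unordered_map<uint32_t, unsigned> classes;
    };

    // Per-lookup cache: filled lazily, reused for every sequence the
    // class-based context lookup tries to match during this run.
    struct ClassCache
    {
      unsigned get (const ClassDef &classdef, uint32_t glyph_id)
      {
        auto it = cache.find (glyph_id);
        if (it != cache.end ()) return it->second;
        unsigned klass = classdef.get_class (glyph_id);
        cache.emplace (glyph_id, klass);
        return klass;
      }
      std::unordered_map<uint32_t, unsigned> cache;
    };

    int main ()
    {
      ClassDef classdef { { {10, 1}, {11, 2} } };
      ClassCache cache;
      std::vector<uint32_t> glyphs { 10, 11, 10, 10, 11 };
      unsigned sum = 0;
      for (uint32_t g : glyphs)
        sum += cache.get (classdef, g);  // get_class computed once per glyph
      return sum == 7 ? 0 : 1;
    }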
For BASE table format 1.1, the handling of design
space vs user space coordinates was inconsistent.
We were applying the design -> user transformation
twice to the deltas, leading to wrong baseline
values.
Patch by Ebrahim Byagowi <ebrahim@gnu.org>
Fixes: #3476
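A small numeric sketch of the double-scaling problem, using made-up
values and a simplified em-scaling helper rather than the real BASE
code: the variation delta is already in design (font-unit) space, so
the design -> user transform must be applied exactly once to the
summed coordinate.

    #include <cstdio>

    // Toy design -> user transform (plain em-scaling), standing in for
    // the real conversion.
    static double design_to_user (double design_units, double upem, double ppem)
    {
      return design_units * ppem / upem;
    }

    int main ()
    {
      const double upem = 1000.0, ppem = 16.0;
      const double base_coord = 880.0;  // baseline coordinate, design units
      const double delta      = 50.0;   // variation delta, also design units

      // Correct: sum in design space, convert once.
      double correct = design_to_user (base_coord + delta, upem, ppem);

      // Buggy: the delta gets the design -> user transform a second time.
      double buggy = design_to_user (base_coord, upem, ppem)
                   + design_to_user (design_to_user (delta, upem, ppem),
                                     upem, ppem);

      printf ("correct = %g, buggy = %g\n", correct, buggy);  // 14.88 vs 14.0928
      return 0;
    }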
Add HB_OT_LAYOUT_BASELINE_TAG_IDEO_EMBOX_CENTRAL
and HB_OT_LAYOUT_BASELINE_TAG_IDEO_FACE_CENTRAL,
which are the centers of the ideographic em-box
and face box.
Add a variation of hb_ot_layout_get_baseline that
synthesizes missing baselines, using heuristics in part
taken from the CSS Inline Layout Module, Level 3.
Includes some new tests for synthesized baselines.
base2.ttf is a subset of Noto Sans Bengali that
includes just the Bengali Ka.
New API: hb_ot_layout_get_baseline_with_fallback
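A usage sketch for the new API; the font path is a placeholder and the
'hani' OpenType script tag is just an example. With the font scale set
to the face's upem, the returned coordinate is in font units.

    #include <hb.h>
    #include <hb-ot.h>
    #include <cstdio>

    int main ()
    {
      hb_blob_t *blob = hb_blob_create_from_file ("font.ttf");  // placeholder
      hb_face_t *face = hb_face_create (blob, 0);
      hb_font_t *font = hb_font_create (face);
      unsigned upem = hb_face_get_upem (face);
      hb_font_set_scale (font, upem, upem);

      // Ask for the ideographic em-box center; HarfBuzz synthesizes it if
      // the font does not provide one.
      hb_position_t coord = 0;
      hb_ot_layout_get_baseline_with_fallback (font,
                                               HB_OT_LAYOUT_BASELINE_TAG_IDEO_EMBOX_CENTRAL,
                                               HB_DIRECTION_LTR,
                                               HB_TAG ('h','a','n','i'),
                                               HB_TAG_NONE,
                                               &coord);
      printf ("ideo em-box central baseline: %d font units\n", coord);

      hb_font_destroy (font);
      hb_face_destroy (face);
      hb_blob_destroy (blob);
      return 0;
    }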
cur_intersected_glyphs gets modified during recursion, leading to incorrect filtering of subtables in some cases. So don't use cur_intersected_glyphs; instead, add an additional entry onto the parent_active_glyphs () stack.
Additionally, this expands the NotoNastaliqUrdu tests to include coverage of the issue from #3397.
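A schematic sketch of the pattern behind the fix, using hypothetical
types rather than the real closure context: each recursion level pushes
its own glyph set onto a stack instead of mutating a shared "current"
set, so the parent's active glyphs are intact when the recursive call
returns.

    #include <cstdint>
    #include <set>
    #include <vector>

    using glyph_set_t = std::set<uint32_t>;

    struct closure_context_t
    {
      std::vector<glyph_set_t> parent_active_glyphs;

      const glyph_set_t &active_glyphs () const
      { return parent_active_glyphs.back (); }

      void push (glyph_set_t glyphs)
      { parent_active_glyphs.push_back (std::move (glyphs)); }

      void pop () { parent_active_glyphs.pop_back (); }
    };

    // Recurse into a nested lookup with the glyphs that survived this
    // level's filtering, without touching the caller's set.
    static void recurse (closure_context_t &c, glyph_set_t filtered)
    {
      c.push (std::move (filtered));
      // ... close over the nested lookup against c.active_glyphs () ...
      c.pop ();
    }

    int main ()
    {
      closure_context_t c;
      c.push ({1, 2, 3});
      recurse (c, {2, 3});
      // The parent's entry is unchanged after recursion.
      return c.active_glyphs () == glyph_set_t {1, 2, 3} ? 0 : 1;
    }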
This line had me confused for a second because
the condition looked like a cast and the if just
looked misplaced. Add a line break to prevent
such confusion.
Though the spec says FeatureRecords are sorted alphabetically by feature
tag, there are font files with an unsorted FeatureList, and HarfBuzz is
not able to subset these files correctly because we use binary search to
find FeatureRecords when collecting lookups. find_duplicate_features
also needs to be updated to handle this.
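A small sketch of why that breaks, with a hypothetical FeatureRecord
type: binary search is only valid when the records are sorted by tag,
while a linear scan stays correct on an unsorted FeatureList.

    #include <cstdint>
    #include <vector>

    struct FeatureRecord { uint32_t tag; unsigned feature_index; };

    // Collect every feature index whose record matches `tag`, without
    // assuming the FeatureList is sorted.
    static std::vector<unsigned>
    find_feature_indices (const std::vector<FeatureRecord> &records,
                          uint32_t tag)
    {
      std::vector<unsigned> out;
      for (const auto &r : records)
        if (r.tag == tag)
          out.push_back (r.feature_index);
      return out;
    }

    int main ()
    {
      // Unsorted on purpose: 'liga' records surround 'kern'.
      std::vector<FeatureRecord> records {
        { 0x6C696761u /* liga */, 2 },
        { 0x6B65726Eu /* kern */, 0 },
        { 0x6C696761u /* liga */, 5 },
      };

      // A binary search over `records` may miss entries here because its
      // precondition (sorted by tag) does not hold; the linear scan finds
      // both 'liga' records.
      return find_feature_indices (records, 0x6C696761u).size () == 2 ? 0 : 1;
    }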
Precompute a feature index filter to avoid iterating the feature tag list for each encountered feature index. For this particular fuzzer case, this speeds up feature collection from 50s to 2s.
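A sketch of the precomputed-filter idea, again with hypothetical types:
resolve the requested tags to a set of feature indices in one pass, so
each encountered index afterwards is a constant-time membership test
instead of another walk over the tag list.

    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    struct FeatureRecord { uint32_t tag; unsigned feature_index; };

    static std::unordered_set<unsigned>
    build_feature_index_filter (const std::vector<FeatureRecord> &feature_list,
                                const std::unordered_set<uint32_t> &wanted_tags)
    {
      std::unordered_set<unsigned> filter;
      for (const auto &r : feature_list)  // single pass over the tag list
        if (wanted_tags.count (r.tag))
          filter.insert (r.feature_index);
      return filter;
    }

    int main ()
    {
      std::vector<FeatureRecord> feature_list {
        { 0x6B65726Eu /* kern */, 0 },
        { 0x6C696761u /* liga */, 1 },
        { 0x736D6370u /* smcp */, 2 },
      };
      auto filter = build_feature_index_filter (feature_list,
                                                { 0x6B65726Eu, 0x6C696761u });

      // While collecting lookups, each encountered feature index is now a
      // cheap membership test.
      std::vector<unsigned> encountered { 2, 1, 0, 1 };
      unsigned collected = 0;
      for (unsigned idx : encountered)
        if (filter.count (idx))
          ++collected;
      return collected == 3 ? 0 : 1;
    }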
Made sure clear_output is always paired with swap_buffers.
Trying to see if we can move towards RAII-like buffer iterators
instead of the buffer keeping an iterator internally.
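One possible shape for that, sketched with a hypothetical buffer type
rather than hb_buffer_t's real internals: a scope guard that clears the
output side on construction and swaps the buffers on destruction, so
the two calls can never get unpaired.

    #include <vector>

    struct buffer_t
    {
      std::vector<int> in, out;
      void clear_output () { out.clear (); }
      void swap_buffers () { in.swap (out); }
    };

    // RAII guard: clear_output () on entry, swap_buffers () on exit.
    class output_scope_t
    {
     public:
      explicit output_scope_t (buffer_t &b) : buffer (b)
      { buffer.clear_output (); }
      ~output_scope_t () { buffer.swap_buffers (); }
      output_scope_t (const output_scope_t &) = delete;
      output_scope_t &operator= (const output_scope_t &) = delete;

     private:
      buffer_t &buffer;
    };

    int main ()
    {
      buffer_t buffer;
      buffer.in = {1, 2, 3};
      {
        output_scope_t scope (buffer);
        for (int g : buffer.in)
          buffer.out.push_back (g * 10);  // produce into the output side
      }                                   // buffers swapped on scope exit
      return buffer.in == std::vector<int> {10, 20, 30} ? 0 : 1;
    }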