Also increase the HB_GLYF_MAX_POINTS limit to 20000, because the test file has a
.notdef glyph which is a composite glyph with 10176 points after the
get_points() call.
This previously incorrectly collected lookups that could only be reached via feature variations that are dropped and not activated at the current instance position.
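A minimal Python sketch of the intended rule, using plain data rather than HarfBuzz's actual types (names and data shapes here are hypothetical): only the first feature-variation record whose conditions hold at the pinned position contributes its substituted lookups; dropped records contribute nothing.

```python
# Hypothetical sketch (plain data, not HarfBuzz's types): collect lookup
# indices from the default features plus only the substitutions of the first
# feature-variation record whose conditions hold at the pinned position.
def collect_lookup_indices(default_feature_lookups, variation_records, location):
    """default_feature_lookups: {feature_tag: [lookup_index, ...]}
    variation_records: [(conditions, {feature_tag: [lookup_index, ...]})]
      where conditions is {axis_tag: (min, max)}.
    location: {axis_tag: pinned_value}."""
    lookups = {i for idxs in default_feature_lookups.values() for i in idxs}
    for conditions, substitutions in variation_records:
        matches = all(lo <= location.get(axis, 0) <= hi
                      for axis, (lo, hi) in conditions.items())
        if not matches:
            continue  # this record is dropped at the pinned instance position
        for idxs in substitutions.values():
            lookups.update(idxs)
        break  # per OpenType, the first matching record is the one applied
    return lookups
```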
Also updated the script used to generate tests. With fonttools, we now do
instancing first and then subsetting (see the sketch after the list below).
With a different order of subsetting and instancing operations on the same
VF file, fonttools seems to generate two different font files with different
glyph sets.
1. Subsetting and then instancing: this seems to result in a larger glyph
set in the font file. Lookups are collected from both the retained features
and all possible alternate feature variations, which leads to a larger
glyph set after glyph closure. The instancer doesn't redo glyph closure;
it only prunes lookups.
2. Instancing and then subsetting: lookups are collected from features that
have already been replaced and from the possible alternate feature
variations.
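The updated order in the test-generation script looks roughly like the following; this is only a sketch of the shape of the pipeline, and the font path, pinned axis value, codepoints, and output name are placeholders, not the script's actual inputs.

```python
from fontTools.ttLib import TTFont
from fontTools.varLib import instancer
from fontTools import subset

# Placeholder inputs: variable font, pinned axis location, codepoints to keep.
font = TTFont("MyVariableFont.ttf")

# 1. Instance first: pin the variation axes.
instancer.instantiateVariableFont(font, {"wght": 400}, inplace=True)

# 2. Then subset the pinned font.
options = subset.Options()
subsetter = subset.Subsetter(options)
subsetter.populate(unicodes=[0x41, 0x42, 0x43])
subsetter.subset(font)

font.save("MyFont-instanced-subset.ttf")
```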
The existing code doesn't correctly handle the case where palettes partially overlap in the color record array. This changes the subsetting to only share entries in the color record array when palettes have the same first color index. Partially overlapping palettes will be converted to disjoint segments in the color record array.
Updates one of the color tests to use multiple palettes.
Also fixes fuzzer: https://oss-fuzz.com/testcase-detail/5568200165687296.
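A minimal Python sketch of the new sharing rule (a hypothetical helper, not the actual CPAL subsetting code): palettes share color record storage only when they start at the same first color index; anything else gets its own disjoint segment.

```python
# Hypothetical sketch of the sharing rule; not the actual CPAL subsetter.
def rebuild_color_records(palettes):
    """palettes: list of (first_color_index, [color, ...]) from the source.
    Returns (new_color_records, new_first_indices)."""
    new_records = []        # rebuilt, flattened color record array
    seen_starts = {}        # old first_color_index -> new start offset
    new_first_indices = []
    for first_index, colors in palettes:
        if first_index in seen_starts:
            # Same first color index as an earlier palette: share its entries.
            new_first_indices.append(seen_starts[first_index])
        else:
            # Partially overlapping (or unrelated) palettes become disjoint.
            seen_starts[first_index] = len(new_records)
            new_first_indices.append(len(new_records))
            new_records.extend(colors)
    return new_records, new_first_indices
```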
We should not cache visited LangSys, because two different Record<Langsys>
entries could have different Tags while pointing to the same LangSys; a
LangSys being redundant in Record<Langsys> A does not mean it's redundant in
Record B. The same applies to visited_script.
Also add the number of features in the LangSys's feature list to the
visited langsys count so it's more accurate.
Plus some improvements in LangSys compare().
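A small Python sketch of the point being made (the data shapes are hypothetical): the redundancy verdict has to be computed per record, in the context of its own script, so a verdict cached by the LangSys object alone cannot be reused.

```python
# Hypothetical sketch: two records with different tags can point to the same
# LangSys, and whether it is redundant (here: equal to that script's default
# LangSys) depends on the record's own context, so don't cache by LangSys.
def prune_redundant_langsys(scripts):
    """scripts: list of (default_langsys, [(tag, langsys), ...]).
    A langsys here is just a tuple of feature indices."""
    kept = []
    for default_langsys, records in scripts:
        for tag, langsys in records:
            # Decide per record; a "visited" verdict computed for the same
            # langsys under another script/tag is not valid here.
            if langsys != default_langsys:
                kept.append((tag, langsys))
    return kept
```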
cur_intersected_glyphs gets modified during recursion, leading to incorrect filtering of subtables in some cases. So don't use cur_intersected_glyphs; instead, just push an additional entry onto the parent_active_glyphs() stack.
Additionally, expands the NotoNastaliqUrdu tests to include coverage of the issue from #3397.
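A rough Python sketch of the shape of the fix (all names and data shapes are hypothetical, not HarfBuzz's closure code): instead of mutating one shared set during recursion, push a fresh entry onto a per-recursion stack and pop it on the way out.

```python
# Hypothetical sketch: keep per-level active glyph sets on a stack instead of
# mutating one shared set that deeper recursion levels would corrupt.
def closure_lookup(lookup, active_glyphs, parent_active_glyphs):
    parent_active_glyphs.append(set(active_glyphs))  # push; don't mutate shared state
    try:
        for subtable in lookup["subtables"]:
            # Filter the subtable against this level's glyphs only.
            intersecting = parent_active_glyphs[-1] & set(subtable["coverage"])
            for child in subtable.get("recursed_lookups", []):
                closure_lookup(child, intersecting, parent_active_glyphs)
    finally:
        parent_active_glyphs.pop()                   # pop on the way out
```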
- When pos_glyphs is empty, use the current full glyph set as input for the
  subsequent recursive closure process (see the sketch after this list)
- Also increase max_lookup_visit_count to 35000, because a real font file hit
  the previous limit of 20000 and some lookups were dropped unexpectedly
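For the first bullet, a one-line Python sketch of the fallback (names are hypothetical):

```python
# Hypothetical sketch: if the computed positional glyph set is empty, recurse
# with the current full glyph set rather than an empty set.
def glyphs_for_recursion(pos_glyphs, current_glyphs):
    return set(pos_glyphs) if pos_glyphs else set(current_glyphs)
```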
Ligature subtables use virtual links to enforce an ordering constraint between the subtables and the coverage table. Unfortunately this has the side effect of preventing the subtables from being shared by another Ligature with a different coverage table, since object equality compares all links, real and virtual. This change stores virtual links separately from real links and updates the equality check to only check real links. If an object is de-duped, any virtual links it has are merged into the object that replaces it.
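A minimal Python sketch of the split (a hypothetical object model, not the HarfBuzz serializer): equality and hashing consider only real links, and when a duplicate is dropped its virtual links are merged into the survivor.

```python
# Hypothetical object model: real links define identity; virtual links only
# carry ordering constraints and are merged when an object is de-duped.
class Object:
    def __init__(self, data, real_links=(), virtual_links=()):
        self.data = data
        self.real_links = list(real_links)
        self.virtual_links = list(virtual_links)

    def __eq__(self, other):
        return (self.data, self.real_links) == (other.data, other.real_links)

    def __hash__(self):
        return hash((self.data, tuple(self.real_links)))

def dedupe(objects):
    survivors = {}
    for obj in objects:
        if obj in survivors:
            # Keep the dropped duplicate's ordering constraints.
            survivors[obj].virtual_links.extend(obj.virtual_links)
        else:
            survivors[obj] = obj
    return list(survivors.values())
```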
The current CMAP4 implementation takes whatever the current codepoint ranges are and encodes them as individual glyph ids, or as a delta if possible. However, it's often possible to save bytes by splitting up existing ranges and encoding parts of them using deltas, wherever the cost of splitting the range is less than the cost of encoding each glyph individually.
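As a rough worked example in Python (the byte costs follow the cmap format 4 layout: 8 bytes of segment headers per segment, plus 2 bytes per code point when a segment has to go through glyphIdArray):

```python
# Rough cost model for cmap format 4: every segment costs 8 bytes of headers
# (startCode, endCode, idDelta, idRangeOffset); a segment that cannot be
# delta-encoded also costs 2 bytes per code point in glyphIdArray.
def cost(num_codepoints, delta_encodable):
    return 8 + (0 if delta_encodable else 2 * num_codepoints)

# One mixed range of 40 code points, encoded via glyphIdArray:
whole = cost(40, delta_encodable=False)                      # 8 + 80 = 88 bytes

# Split out a run of 20 consecutive glyph ids in the middle into its own
# delta-encoded segment, leaving two glyphIdArray segments around it:
split = cost(10, False) + cost(20, True) + cost(10, False)   # 28 + 8 + 28 = 64

# Splitting pays off whenever the glyphIdArray bytes saved exceed the extra
# segment headers: here 88 - 64 = 24 bytes saved.
assert split < whole
```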
Though the spec says FeatureRecords are sorted alphabetically by feature
tag, there are font files with unsorted FeatureLists, and HarfBuzz is not
able to subset these files correctly because we use binary search when
finding FeatureRecords while collecting lookups. find_duplicate_features
also needs to be updated to handle this.
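A small Python sketch of a tolerant lookup (a hypothetical helper, not the HarfBuzz code): try the binary search the spec permits, then fall back to a linear scan so unsorted FeatureLists still work.

```python
import bisect

# Hypothetical helper: binary-search a FeatureList by tag, but fall back to a
# linear scan for fonts whose FeatureRecords are not sorted as the spec says.
def find_feature_record(records, tag):
    """records: list of (feature_tag, feature) pairs in FeatureList order."""
    tags = [t for t, _ in records]
    i = bisect.bisect_left(tags, tag)
    if i < len(records) and records[i][0] == tag:
        return records[i][1]
    for t, feature in records:   # unsorted list: linear fallback
        if t == tag:
            return feature
    return None
```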
In Windows 7 on Chrome, if the coverage table comes before any of the LigatureSet or Ligature subtables, the font won't load. This changes the packing order to always place the Coverage table last. Virtual links are used to ensure the repacker maintains the desired ordering.
Coincidentally fontTools also does the same thing (a3f988fbf6/Lib/fontTools/ttLib/tables/otTables.py (L1137)) to reduce overflows during packing.
ArrayOf.serialize_append allocates space for the new item, but ArrayOf.pop() does not recover the allocated space. So in the case where the revert path was entered, the extra space added by serialize_append got left in the serialization buffer. This moves the snapshot to before ArrayOf.serialize_append is called so that revert cleans up the buffer extension.
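A tiny Python sketch of the pattern (not the actual serializer; names are hypothetical): take the snapshot before the append so that revert also releases the bytes the append grew.

```python
# Hypothetical sketch of the snapshot/revert pattern, not the real serializer.
class SerializeBuffer:
    def __init__(self):
        self.data = bytearray()
    def snapshot(self):
        return len(self.data)          # remember the current length
    def append(self, item: bytes):
        self.data += item              # grows the buffer
    def revert(self, snap):
        del self.data[snap:]           # shrink back to the snapshot

def try_append(buf, item, validate):
    snap = buf.snapshot()              # snapshot BEFORE the append ...
    buf.append(item)
    if not validate(item):
        buf.revert(snap)               # ... so revert also frees these bytes
        return False
    return True
```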
Fixes #2361. Stores tables in the builder in a hashmap so you end up with at most one copy of each table. Table serialization order is now based on tag sort order instead of order of insertion into the builder.
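A minimal Python sketch of the builder behaviour described above (a hypothetical class, not the real builder API): a dict keyed by tag keeps at most one copy of each table, and serialization walks the tags in sorted order.

```python
# Hypothetical sketch: one table per tag, emitted in tag order.
class FaceBuilder:
    def __init__(self):
        self.tables = {}                 # tag -> table bytes

    def add_table(self, tag, data):
        self.tables[tag] = data          # re-adding a tag replaces the old copy

    def serialize(self):
        # Order depends on tag sort order, not insertion order.
        return [(tag, self.tables[tag]) for tag in sorted(self.tables)]

builder = FaceBuilder()
builder.add_table("glyf", b"...")
builder.add_table("cmap", b"...")
builder.add_table("glyf", b"...")        # only one copy of glyf is kept
assert [t for t, _ in builder.serialize()] == ["cmap", "glyf"]
```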