Behdad Esfahbod | 8bfeea4828 | 2022-05-05 11:13:57 -06:00
  [subset] Compute set max using previous()
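The commit above touches hb_set_t internals, but the same idea can be illustrated with the public set API: the maximum element falls out of a single backwards-iteration step, so no full scan is needed. A minimal sketch:

```c++
// Illustrative sketch using the public hb_set_t API (not the internal code
// this commit changes): fetch the largest element with one previous() step.
#include <hb.h>
#include <stdio.h>

int main ()
{
  hb_set_t *s = hb_set_create ();
  hb_set_add_range (s, 10, 20);
  hb_set_add (s, 1000);

  hb_codepoint_t max = HB_SET_VALUE_INVALID;
  if (hb_set_previous (s, &max))   /* starting from HB_SET_VALUE_INVALID yields the maximum */
    printf ("max = %u\n", max);    /* prints 1000 */

  hb_set_destroy (s);
  return 0;
}
```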
Behdad Esfahbod | 00cb8c629d | 2022-05-05 11:13:57 -06:00
  [subset] Don't go into glyf table if it's empty
Behdad Esfahbod | 4fe69bc413 | 2022-05-05 11:13:57 -06:00
  [subset] Use del_range in _remove_invalid_gids
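_remove_invalid_gids is an internal subsetter helper; a hedged sketch of the idea with the public API is to drop every glyph id at or above num_glyphs with one ranged delete instead of a per-element loop (the all-ones value here is just an upper bound covering every possible gid):

```c++
// Sketch of the idea with the public API: remove out-of-range glyph ids with
// a single ranged delete rather than deleting them one by one.
#include <hb.h>

static void
remove_invalid_gids (hb_set_t *glyphs, unsigned num_glyphs)
{
  /* Delete everything in [num_glyphs .. 0xFFFFFFFF] in one call. */
  hb_set_del_range (glyphs, num_glyphs, (hb_codepoint_t) -1);
}
```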
Behdad Esfahbod | 2a42edccbe | 2022-05-05 10:35:54 -06:00
  [subset] Cosmetic; use set bulk array population instead of for loop
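Roughly the pattern being adopted, shown with the public counterpart of the internal bulk path (hb_set_add_sorted_array(), which assumes a reasonably recent HarfBuzz and a sorted input array):

```c++
// Populate a set from a sorted array in one call rather than one
// hb_set_add() per element.
#include <hb.h>

static void
populate (hb_set_t *s, const hb_codepoint_t *sorted_cps, unsigned count)
{
  /* Instead of:
   *   for (unsigned i = 0; i < count; i++)
   *     hb_set_add (s, sorted_cps[i]);
   */
  hb_set_add_sorted_array (s, sorted_cps, count);
}
```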
Garret Rieger | bc5129d7fa | 2022-05-05 10:01:49 -06:00
  [perf] use option_t in subset benchmark to select between glyphs and codepoint subset.
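For illustration only (this is not the actual perf/benchmark-subset code, and the enum and font path below are made up): a Google Benchmark can switch between codepoint- and glyph-based subsetting through a benchmark argument, filling either the input's unicode set or its glyph set.

```c++
#include <benchmark/benchmark.h>
#include <hb-subset.h>

enum operation_t { SUBSET_CODEPOINTS, SUBSET_GLYPHS };   /* hypothetical names */

static void BM_subset (benchmark::State &state, const char *font_path)
{
  operation_t op = (operation_t) state.range (0);
  unsigned subset_size = (unsigned) state.range (1);

  hb_blob_t *blob = hb_blob_create_from_file_or_fail (font_path);  /* error handling omitted */
  hb_face_t *face = hb_face_create (blob, 0);

  for (auto _ : state)
  {
    hb_subset_input_t *input = hb_subset_input_create_or_fail ();
    hb_set_t *s = (op == SUBSET_GLYPHS)
                  ? hb_subset_input_glyph_set (input)
                  : hb_subset_input_unicode_set (input);
    hb_set_add_range (s, 0, subset_size - 1);            /* crude test set */

    hb_face_t *result = hb_subset_or_fail (face, input);
    hb_face_destroy (result);
    hb_subset_input_destroy (input);
  }

  hb_face_destroy (face);
  hb_blob_destroy (blob);
}
BENCHMARK_CAPTURE (BM_subset, roboto, "Roboto-Regular.ttf")   /* path is illustrative */
    ->Args ({SUBSET_CODEPOINTS, 64})
    ->Args ({SUBSET_GLYPHS, 64});
BENCHMARK_MAIN ();
```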
Behdad Esfahbod | 43938ecdc2 | 2022-05-04 16:59:28 -06:00
  [subset] Remove outdated comment
  I tried something like that. It was slower because of the allocations.
Garret Rieger | 6212856ce8 | 2022-05-04 16:50:06 -06:00
  [perf] benchmark subsetting via glyphs.
Behdad Esfahbod | 6829dd30ad | 2022-05-04 16:49:45 -06:00
  Merge pull request #3562 from harfbuzz/subset-cmap-no-qsort
  [subset] In cmap planning, remove a qsort()
Behdad Esfahbod | 50db78ba83 | 2022-05-04 16:18:27 -06:00
  [subset] In cmap planning, remove a qsort()
Behdad Esfahbod | 052812b6ba | 2022-05-04 15:38:30 -06:00
  Merge pull request #3561 from googlefonts/cmap_opt
  [subset] Further cmap subsetting speed optimizations
Garret Rieger | 7cb36e4222 | 2022-05-04 21:22:26 +00:00
  [subset] Re-introduce size threshold in choosing unicode collection method.
  Threshold is needed since the unicodes set might be an inverted set.
Garret Rieger | 42c54eba83 | 2022-05-04 20:21:43 +00:00
  [subset] Presize unicode to gid list to unicodes + glyphs size.
Garret Rieger | 7c7c01d28c | 2022-05-03 22:45:39 +00:00
  [subset] Remove switch to alternate unicode collection at large subset sizes.
  Benchmarks show that the first path is always faster even at large subset sizes:
  BM_subset_codepoints/subset_roboto/10_median +0.0324 +0.0325 0 0 0 0
  BM_subset_codepoints/subset_roboto/64_median +0.0253 +0.0255 0 1 0 1
  BM_subset_codepoints/subset_roboto/512_median +0.0126 +0.0128 1 1 1 1
  BM_subset_codepoints/subset_roboto/4000_median +0.0500 +0.0491 6 7 6 7
  BM_subset_codepoints/subset_amiri/10_median +0.0338 +0.0332 1 1 1 1
  BM_subset_codepoints/subset_amiri/64_median +0.0238 +0.0234 1 1 1 1
  BM_subset_codepoints/subset_amiri/512_median +0.0066 +0.0063 8 8 8 8
  BM_subset_codepoints/subset_amiri/4000_median -0.0011 -0.0012 13 13 13 13
  BM_subset_codepoints/subset_noto_nastaliq_urdu/10_median +0.0226 +0.0226 0 0 0 0
  BM_subset_codepoints/subset_noto_nastaliq_urdu/64_median +0.0047 +0.0044 20 20 20 20
  BM_subset_codepoints/subset_noto_nastaliq_urdu/512_median +0.0022 +0.0021 165 166 165 166
  BM_subset_codepoints/subset_noto_nastaliq_urdu/1000_median -0.0021 -0.0023 166 166 166 165
  BM_subset_codepoints/subset_noto_devangari/10_median +0.0054 +0.0054 0 0 0 0
  BM_subset_codepoints/subset_noto_devangari/64_median +0.0024 +0.0019 0 0 0 0
  BM_subset_codepoints/subset_noto_devangari/512_median +0.0089 +0.0090 5 5 5 5
  BM_subset_codepoints/subset_noto_devangari/1000_median -0.0028 -0.0019 5 5 5 5
  BM_subset_codepoints/subset_mplus1p/10_median +0.0001 +0.0002 0 0 0 0
  BM_subset_codepoints/subset_mplus1p/64_median +0.0073 +0.0075 1 1 1 1
  BM_subset_codepoints/subset_mplus1p/512_median +0.0034 +0.0034 1 1 1 1
  BM_subset_codepoints/subset_mplus1p/4096_median -0.1248 -0.1248 7 6 7 6
  BM_subset_codepoints/subset_mplus1p/10000_median -0.0885 -0.0885 13 12 13 12
  BM_subset_codepoints/subset_notocjk/10_median +0.0031 +0.0032 2 2 2 2
  BM_subset_codepoints/subset_notocjk/64_median -0.0010 -0.0010 2 2 2 2
  BM_subset_codepoints/subset_notocjk/512_median -0.0023 -0.0023 9 9 9 9
  BM_subset_codepoints/subset_notocjk/4096_median -0.1725 -0.1726 28 23 28 23
  BM_subset_codepoints/subset_notocjk/32768_median -0.0277 -0.0287 140 137 140 136
  BM_subset_codepoints/subset_notocjk/100000_median -0.0929 -0.0926 162 147 162 147
Garret Rieger | f0c04114bc | 2022-05-03 22:02:59 +00:00
  [subset] Embed unicode to gid list vector in subset plan.
Behdad Esfahbod | f67e6bf79c | 2022-05-02 16:59:48 -06:00
  [perf/benchmark-font] Add benchmark for glyph_h_advance
Behdad Esfahbod | 1c0a3d4d16 | 2022-05-02 16:50:54 -06:00
  [perf/benchmark-font] Add a couple Noto fonts
Behdad Esfahbod | 15fa8afb21 | 2022-05-02 16:46:41 -06:00
  Add fast-path for big-endian 32-bit byteswap
  Speeds up cmap format-12 decoding by some 40% as measured by the newly added test in perf/benchmark-font!
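cmap format 12 stores 32-bit big-endian values, so reading it on a little-endian host means byte-swapping every field. A minimal sketch of the usual fast path (not HarfBuzz's exact code): prefer a single bswap instruction via a compiler builtin, with a portable shift/mask fallback.

```c++
#include <stdint.h>

static inline uint32_t
swap32 (uint32_t v)
{
#if defined(__GNUC__) || defined(__clang__)
  return __builtin_bswap32 (v);   /* compiles to a single bswap/rev instruction */
#else
  return (v >> 24) |
         ((v >> 8) & 0x0000FF00u) |
         ((v << 8) & 0x00FF0000u) |
         (v << 24);
#endif
}

/* Reading a big-endian 32-bit field then becomes swap32(raw) on little-endian
 * hosts, instead of assembling the value byte by byte. */
```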
Behdad Esfahbod | 3fff2e9182 | 2022-05-02 16:42:10 -06:00
  [perf/benchmark-font] Cosmetic
Behdad Esfahbod | 307d2d8bb6 | 2022-05-02 16:30:22 -06:00
  [cmap] Sprinkle some 'unlikely's
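For context, this kind of annotation is the standard __builtin_expect branch hint: mark the error/edge branch as cold so the hot path stays straight-line. A generic sketch of the pattern (macro name here is illustrative, not HarfBuzz's exact definition):

```c++
#if defined(__GNUC__) || defined(__clang__)
#define hb_unlikely_(expr) __builtin_expect (!!(expr), 0)
#else
#define hb_unlikely_(expr) (expr)
#endif

static unsigned
lookup (const unsigned *array, unsigned len, unsigned i)
{
  if (hb_unlikely_ (i >= len))   /* out-of-bounds is the rare case */
    return 0;
  return array[i];
}
```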
Garret Rieger | 85ec5cbcef | 2022-05-02 22:29:43 +00:00
  [subset] In _populate_unicodes_to_retain populate unicodes in order.
  Allows the set insert to take advantage of page lookup cache.
Behdad Esfahbod | 0d1f8dcaf3 | 2022-05-02 16:18:53 -06:00
  [perf/benchmark-font] Actually make nominal_glyph bench work
Behdad Esfahbod | 6cf69d10e7 | 2022-05-02 16:07:32 -06:00
  [perf/benchmark-font] Add back testing of is_variable
Behdad Esfahbod | 3aa2ff7988 | 2022-05-02 16:01:22 -06:00
  [perf/benchmark-font] Fix build without freetype
Behdad Esfahbod | 58a0988b57 | 2022-05-02 15:57:19 -06:00
  [perf/benchmark-font] Benchmark get_nominal_glyph
Behdad Esfahbod | 03f16fab58 | 2022-05-02 15:44:41 -06:00
  Merge pull request #3560 from harfbuzz/perf-cleanup
  Perf cleanup
Garret Rieger | 088133d939 | 2022-05-02 21:29:16 +00:00
  [subset] cache cp to new gid list in subset plan.
  This avoids having to recompute the ordered list multiple times during cmap generation.
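The plan-level cache itself is internal; purely as an illustration of the shape of such a cache (names and types below are hypothetical, not HarfBuzz's), the idea is to build the ordered codepoint-to-new-gid list once during planning and reuse it for every cmap subtable that needs it:

```c++
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct plan_sketch
{
  /* Sorted by codepoint; filled once during planning, read many times. */
  std::vector<std::pair<uint32_t, uint32_t>> cp_to_new_gid;

  void build (const std::map<uint32_t, uint32_t> &unicode_to_new_gid)
  {
    /* std::map iterates in key order, so the cached list comes out sorted. */
    cp_to_new_gid.assign (unicode_to_new_gid.begin (), unicode_to_new_gid.end ());
  }
};
```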
Behdad Esfahbod | 6d29903e86 | 2022-05-02 14:03:15 -06:00
  [perf/benchmark-font] Parametrize test
Behdad Esfahbod | 636c90e81c | 2022-05-02 13:41:49 -06:00
  [perf/perf] Rename to benchmark-font
Behdad Esfahbod | 036d03d2e9 | 2022-05-02 13:40:13 -06:00
  [perf/perf] Move all logic to perf-draw, for now
  To be renamed.
Behdad Esfahbod | 746c3c03c5 | 2022-05-02 13:27:32 -06:00
  [perf/perf] Remove ttf-parser backend
Behdad Esfahbod | 4aaa0af7d9 | 2022-05-02 13:06:27 -06:00
  [perf/perf] Rely on hb-draw to measure ft performance
Behdad Esfahbod | a4522df378 | 2022-04-29 18:34:00 -06:00
  Merge pull request #3558 from harfbuzz/set-optimize
  [perf] hb_set_t optimizations and perf suite improvements
Garret Rieger | 6922a2561f | 2022-04-29 23:30:32 +00:00
  [subset] Change serialize_rangeoffset_glyid back to using iterator.
Garret Rieger | c66fd50c26 | 2022-04-29 23:18:53 +00:00
  [subset] in cmap4 serialization save cp to gid iter to memory.
  Iterator accesses are slow and it's iterated multiple times.
Garret Rieger | 17b98563dc | 2022-04-29 22:49:02 +00:00
  [subset] In cmap4 serialization reduce unnecessary calls into the iterator.
  Gives ~20% speedup for large subsets.
Garret Rieger | 5e241094bf | 2022-04-29 22:45:16 +00:00
  [subset] In unicodes cache cleanup if set insert fails.
Behdad Esfahbod | 217d38dfc7 | 2022-04-29 16:19:10 -06:00
  Try to fix distcheck
Garret Rieger | a424a92ce5 | 2022-04-29 22:14:03 +00:00
  [subset] s/void */intptr_t.
Garret Rieger | aad67f5629 | 2022-04-29 22:05:34 +00:00
  [subset] cache results of collect_unicodes.
Behdad Esfahbod | 35681b3edb | 2022-04-29 16:02:55 -06:00
  [benchmark-shape] Break lines and shape separately
Behdad Esfahbod | be1ac9c572 | 2022-04-29 15:55:19 -06:00
  [benchmark-shape] Data-driven test sets
Behdad Esfahbod | ae3efc6424 | 2022-04-29 15:37:11 -06:00
  [perf] Spawn off benchmark-shape from perf runner
Behdad Esfahbod | 5f43ce825a | 2022-04-29 13:39:15 -06:00
  [benchmark-set] Split SetLookup into an ordered and random version
Behdad Esfahbod | ae9c7b861b | 2022-04-29 13:39:04 -06:00
  [benchmark-set] At least increase needle by one in lookup benchmark
Behdad Esfahbod | 68a9b83d15 | 2022-04-29 13:28:07 -06:00
  [benchmark-set] At least increase needle by one in lookup benchmark
Garret Rieger | b4236b7de6 | 2022-04-29 19:22:00 +00:00
  [subset] Optimize Cmap4 collect_unicodes.
  Use set add_range() instead of individual add() calls.
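A cmap format-4 segment covers the contiguous codepoint range [startCode .. endCode], so (where every codepoint in the segment is kept) its unicodes can be collected with one ranged insert rather than a per-codepoint loop. A sketch with the public set API; the real collector still has to skip codepoints that map to glyph 0:

```c++
#include <hb.h>

static void
collect_segment (hb_set_t *out, hb_codepoint_t start_code, hb_codepoint_t end_code)
{
  /* Instead of:
   *   for (hb_codepoint_t cp = start_code; cp <= end_code; cp++)
   *     hb_set_add (out, cp);
   */
  hb_set_add_range (out, start_code, end_code);
}
```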
Behdad Esfahbod | 5866ec05f5 | 2022-04-29 13:14:41 -06:00
  [benchmark-map] Remove rand() overhead from benchmark
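The usual fix for this kind of benchmark noise is to draw the random probe keys before the timed loop, so the measurement reflects the map lookup rather than rand(). A generic Google Benchmark sketch (not the actual benchmark-map code):

```c++
#include <benchmark/benchmark.h>
#include <cstdlib>
#include <vector>
#include <hb.h>

static void BM_MapLookup (benchmark::State &state)
{
  hb_map_t *m = hb_map_create ();
  for (unsigned i = 0; i < 10000; i++)
    hb_map_set (m, i, i * 2);

  /* Precompute the probe keys outside the measured region. */
  std::vector<hb_codepoint_t> keys (10000);
  for (auto &k : keys) k = rand () % 10000;

  unsigned i = 0;
  for (auto _ : state)
    benchmark::DoNotOptimize (hb_map_get (m, keys[i++ % keys.size ()]));

  hb_map_destroy (m);
}
BENCHMARK (BM_MapLookup);
```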
Behdad Esfahbod | 067225a86d | 2022-04-29 13:04:36 -06:00
  [set] Optimize const page_for() using last_page_lookup caching
  Similar to previous commit.
  This speeds up SetLookup benchmark by 50%, but that's because that lookup always hits the same page...
Behdad Esfahbod | c283e41ce3 | 2022-04-29 12:45:48 -06:00
  [set] Optimize non-const page_for() using last_page_lookup caching
  This speeds up SetOrderedInsert tests by 15 to 40 percent, and the subset_mplus1p benchmarks by 9 to 27 percent.
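Both page_for() commits above apply the same idea: hb_set_t stores membership in fixed-size pages, and page_for() maps a codepoint to its page via a search over the page index, so memoizing the index of the last page hit lets ordered workloads (and the ordered unicode population earlier in this log) skip that search almost every time. A simplified, hypothetical sketch of the mechanism, not the actual hb_bit_set_t code:

```c++
#include <cstdint>
#include <vector>
#include <algorithm>

struct paged_set_sketch
{
  static constexpr unsigned PAGE_BITS = 512;
  struct page_t { uint64_t bits[PAGE_BITS / 64] = {}; };

  std::vector<uint32_t> page_start;            /* first codepoint of each page, sorted */
  std::vector<page_t>   pages;                 /* parallel to page_start */
  mutable unsigned      last_page_lookup = 0;  /* mutable so a const lookup path could refresh it too */

  page_t *page_for (uint32_t cp)
  {
    uint32_t page_first = cp / PAGE_BITS * PAGE_BITS;

    /* Fast path: same page as last time. */
    unsigned i = last_page_lookup;
    if (i < page_start.size () && page_start[i] == page_first)
      return &pages[i];

    /* Slow path: binary search, then refresh the cache. */
    auto it = std::lower_bound (page_start.begin (), page_start.end (), page_first);
    if (it == page_start.end () || *it != page_first)
      return nullptr;                          /* an insert path would create the page here */
    last_page_lookup = (unsigned) (it - page_start.begin ());
    return &pages[last_page_lookup];
  }
};
```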
Behdad Esfahbod | dd005911b9 | 2022-04-29 12:23:53 -06:00
  [benchmark-set] Reduce lookup benchmark overhead
  Turns out 90% was overhead... Now lookup is in the 4ns ballpark.