Behdad Esfahbod | 27c4037e59 | [gvar] Micro-optimize boundary-checking | 2022-11-22 13:12:22 -07:00
Behdad Esfahbod | ab8346fb6f | [gvar] Add an unlikely | 2022-11-22 13:07:39 -07:00
Behdad Esfahbod | 1e8a342ea2 | [gvar] Micro-optimize int types | 2022-11-22 13:04:32 -07:00
Behdad Esfahbod | 4afcdf675b | More hb_memcpy | 2022-11-22 12:56:48 -07:00
Behdad Esfahbod | 58a696d80e | More hb_memset | 2022-11-22 12:56:05 -07:00
Behdad Esfahbod | 59c45f6deb | Use hb_memcpy instead of memcpy consistently | 2022-11-22 12:54:50 -07:00
Behdad Esfahbod | ac0efaf818 | Use hb_memset instead of memset consistently | 2022-11-22 12:50:36 -07:00
Behdad Esfahbod | 44a892a233 | [shape] Use hb_memcmp instead of memcmp | 2022-11-22 12:48:52 -07:00
Behdad Esfahbod | c53c648127 | [subset-cff] Another handrolled memcpy | 2022-11-22 12:46:25 -07:00
Behdad Esfahbod | ae578705c2 | [array] Write hash as range for loop again | 2022-11-22 12:23:17 -07:00
  Now that our range loop is faster than our own iter.
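A minimal sketch of the technique named in ae578705c2, with hypothetical names (hash_bytes and std::vector stand in for HarfBuzz's own array and hash types): hashing the bytes through a plain range-based for loop gives the compiler a simple pointer walk to optimize instead of a custom iterator.

    #include <cstdint>
    #include <vector>

    // FNV-1a over a byte buffer, written as a range-for loop rather than a
    // hand-written iterator. Names are illustrative only.
    static uint32_t hash_bytes (const std::vector<uint8_t> &v)
    {
      uint32_t h = 2166136261u;
      for (uint8_t b : v)        // range-for: compiles down to a plain pointer walk
      {
        h ^= b;
        h *= 16777619u;
      }
      return h;
    }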
Behdad Esfahbod | 13e1ca9eb5 | [cff] Micro-optimize memcpy | 2022-11-22 12:19:28 -07:00
Behdad Esfahbod | 2968dd7844 | [gvar] Optimize as_array() access | 2022-11-22 11:57:29 -07:00
Behdad Esfahbod | bb3bb76450 | [gvar] Optimize scalar = 1.0 case | 2022-11-22 11:53:35 -07:00
Behdad Esfahbod | 2d098d5d7f | [gvar] Use memset | 2022-11-22 11:51:04 -07:00
Behdad Esfahbod | e630a65e60 | [gvar] Micro-optimize vector extend | 2022-11-22 11:29:13 -07:00
Behdad Esfahbod | 49d4f62135 | [gvar] Micro-optimize | 2022-11-22 11:14:56 -07:00
Behdad Esfahbod | 1758ee6646 | [glyf] Minor: write loop more idiomatically | 2022-11-22 10:55:16 -07:00
Behdad Esfahbod | 16ec9dcc1b | [gvar] Whitespace | 2022-11-22 10:55:16 -07:00
Behdad Esfahbod | b567ce51d3 | [subset] Don't trim glyf's again if preprocessed | 2022-11-22 10:55:08 -07:00
  Speeds up M1/10000 benchmark by 30%!
Behdad Esfahbod | 72059a4789 | [gvar] Optimize IUP alg | 2022-11-22 10:41:37 -07:00
Behdad Esfahbod | ee9873b5ed | [gvar] Disable initializing vectors when not necessary | 2022-11-22 10:23:17 -07:00
Behdad Esfahbod | b0d2641186 | [vector] Add "initialize" argument to resize() | 2022-11-22 10:20:11 -07:00
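A minimal sketch of what an "initialize" argument to resize() can look like (b0d2641186, used by ee9873b5ed), assuming a made-up byte_vector_t rather than HarfBuzz's hb_vector_t: new elements are zero-filled only when the caller asks for it, so callers that overwrite the elements immediately skip that cost.

    #include <cstdlib>
    #include <cstring>

    // Hypothetical growable byte buffer whose resize() can skip zero-filling.
    struct byte_vector_t
    {
      unsigned char *arr = nullptr;
      unsigned length = 0, allocated = 0;

      bool resize (unsigned size, bool initialize = true)
      {
        if (size > allocated)
        {
          unsigned char *p = (unsigned char *) realloc (arr, size);
          if (!p) return false;
          arr = p;
          allocated = size;
        }
        if (initialize && size > length)
          memset (arr + length, 0, size - length);  // only pay for zeroing when requested
        length = size;
        return true;
      }

      ~byte_vector_t () { free (arr); }
    };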
Behdad Esfahbod | a2059f8f55 | [gvar] Optimize unpack_points | 2022-11-22 10:16:21 -07:00
Behdad Esfahbod | 6d7206b68b | [gvar] Optimize unpack_deltas | 2022-11-22 10:13:14 -07:00
Behdad Esfahbod | bca569ae53 | [array] Speed up hash() for byte arrays | 2022-11-21 23:19:42 -07:00
Behdad Esfahbod | d7b492e3f5 | Revert "[array] Remove hash specializations for bytes" | 2022-11-21 23:08:51 -07:00
  This reverts commit 213117317c.
Behdad Esfahbod | 1572ba281a | [subset-cff] Return in subr closure if already seen subr | 2022-11-21 22:26:44 -07:00
  Not sure why this was not done before.
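A minimal sketch of the early-return idea in 1572ba281a, with hypothetical names (collect_subr_closure, calls): once a subroutine is already in the closure set, the walk stops instead of re-descending into it, so shared subrs are visited only once.

    #include <set>
    #include <vector>

    // Each subr lists the subrs it calls; collect the transitive closure,
    // returning early for subrs that are already in the set.
    static void collect_subr_closure (unsigned subr,
                                      const std::vector<std::vector<unsigned>> &calls,
                                      std::set<unsigned> &closure)
    {
      if (subr >= calls.size ()) return;
      if (!closure.insert (subr).second)
        return;                         // already seen: stop recursing
      for (unsigned callee : calls[subr])
        collect_subr_closure (callee, calls, closure);
    }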
Behdad Esfahbod | a29ca6efbc | [subset-cff] Comment | 2022-11-21 22:02:17 -07:00
Behdad Esfahbod | 28e767ddea | [subset-cff] Really optimize op_str_t / parsed_cs_op_t layout | 2022-11-21 21:59:51 -07:00
  Now parsed_cs_op_t and op_str_t are both 16 bytes.
  Saves another 7% in SourceHanSans/10000 benchmark.
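A hedged illustration of the layout idea in 28e767ddea, with made-up field names (not the real op_str_t members): keep an 8-byte pointer plus the small fields packed so the whole record fits in two 8-byte words, instead of spilling to 24 bytes through padding.

    #include <cstdint>

    // Illustrative only; field names are hypothetical.
    struct op_str_sketch_t
    {
      const uint8_t *ptr;      // 8 bytes: start of the operator's bytes
      uint32_t       length;   // 4 bytes: number of bytes
      uint8_t        op;       // 1 byte:  operator code
      uint8_t        flags;    // 1 byte:  per-op flags
      uint16_t       extra;    // 2 bytes: padding reclaimed as a useful field
    };

    // Holds on typical 64-bit targets: 16 bytes with 8-byte alignment.
    static_assert (sizeof (op_str_sketch_t) == 16, "fits in two 8-byte words");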
Behdad Esfahbod | 2d5ee23731 | [subset-cff] Readjust parsed_cs_op_t | 2022-11-21 21:55:21 -07:00
  Now it doesn't matter anymore, since op_str_t is adjusted and is 16 bytes with 8-byte alignment.
Behdad Esfahbod | 4f056b923a | [subset-cff] Optimize op_str_t layout | 2022-11-21 21:37:57 -07:00
Behdad Esfahbod | a750cb0128 | Simplify rvalue creation | 2022-11-21 21:03:32 -07:00
Behdad Esfahbod | 86a763c651 | [map] Make keys moveable | 2022-11-21 20:53:44 -07:00
Behdad Esfahbod | cf20d2ec5d | [map] Take const key | 2022-11-21 20:47:17 -07:00
Behdad Esfahbod | 3d1c76f713 | [serializer] Don't hash objects twice | 2022-11-21 19:40:32 -07:00
Behdad Esfahbod | 35878df215 | [algs] Implement swap() for pair_t | 2022-11-21 19:14:03 -07:00
  Helps priority_queue::pop_minimum and friends, which help the subsetter/repacker. Shows a few percent improvement on the NotoNastaliq benchmark.
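A minimal sketch of a member-wise swap for a pair type (35878df215), with a hypothetical pair_sketch_t standing in for hb_pair_t: the heap behind a priority queue can then exchange elements without going through copies.

    #include <utility>

    template <typename A, typename B>
    struct pair_sketch_t
    {
      A first;
      B second;

      // Found by ADL, so containers that call swap() unqualified pick this up.
      friend void swap (pair_sketch_t &a, pair_sketch_t &b)
      {
        using std::swap;
        swap (a.first,  b.first);
        swap (a.second, b.second);
      }
    };

Without this, std::swap falls back to a move-construct plus two move-assigns of the whole pair; a dedicated swap forwards straight to each member's own swap.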
Behdad Esfahbod | a2984a2932 | [cff] Remove unnecessary namespacing | 2022-11-21 18:40:52 -07:00
Behdad Esfahbod | dc3bb5e0ed | [subset-cff] Pre-allocate values array for subroutines as well | 2022-11-21 18:18:48 -07:00
Behdad Esfahbod | c6279224db | [cff] Adjust pre-allocation | 2022-11-21 18:01:50 -07:00
  This better matches actual usage, given that ops are one or two bytes, and vector also allocates 50% extra.
Behdad Esfahbod | bab8ec58b0 | [subset-cff] Disable sharing when packing charstring INDEXes | 2022-11-21 17:46:32 -07:00
  Saves another 8%ish.
Behdad Esfahbod | 2cadacad6c | [cff] Simplify str_encoder_t error handling | 2022-11-21 17:17:15 -07:00
Behdad Esfahbod | f263e3fe2e | [cff] Manually copy short strings instead of memcpy() | 2022-11-21 17:04:55 -07:00
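A minimal sketch of the trade-off behind f263e3fe2e, assuming a hypothetical copy_short helper: CFF operators encode to only a handful of bytes, so a plain byte loop avoids the overhead of a memcpy() call for tiny copies while still deferring to memcpy() for longer ones.

    #include <cstddef>
    #include <cstring>

    // Illustrative helper; the cutoff of 8 bytes is an assumption, not HarfBuzz's.
    static inline void copy_short (unsigned char *dst, const unsigned char *src, size_t n)
    {
      if (n <= 8)
      {
        while (n--)
          *dst++ = *src++;       // short, predictable loop: no library-call overhead
      }
      else
        memcpy (dst, src, n);
    }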
Behdad Esfahbod | 38efd1862f | [cff] Add a likely() | 2022-11-21 17:02:11 -07:00
Behdad Esfahbod | 191025cc96 | [cff] Adjust buffer pre-allocation | 2022-11-21 16:58:19 -07:00
  Most ops take one or two bytes, so allocate count*2, not count*3. Shows a minor speedup in the subsetting benchmark (around 2%).
Behdad Esfahbod | 4b2caafea2 | [subset-cff] Optimize parsed_cs_op_t size | 2022-11-21 16:46:20 -07:00
  Shows 5% speedup on SourceHanSans-Regular/10000 benchmark.
Behdad Esfahbod | e0b06bd1b1 | [subset] Cache has_seac in accelerator | 2022-11-21 16:30:34 -07:00
  Speeds up SourceHanSans-Regular/10000 benchmark by 25%.
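A minimal sketch of the caching pattern in e0b06bd1b1, with made-up types (charstring_t, accelerator_sketch_t): compute the expensive "does any glyph use seac" answer once when the accelerator is built, and let every subsequent subset run read the cached flag instead of rescanning charstrings.

    #include <vector>

    struct charstring_t { bool uses_seac; };

    struct accelerator_sketch_t
    {
      bool has_seac;

      explicit accelerator_sketch_t (const std::vector<charstring_t> &charstrings)
      {
        has_seac = false;
        for (const auto &cs : charstrings)
          if (cs.uses_seac) { has_seac = true; break; }   // computed once, then reused
      }
    };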
Garret Rieger | dd1ba328a8 | [repacker] fix fuzzer timeout. | 2022-11-21 16:24:48 -07:00
  For https://oss-fuzz.com/testcase-detail/5845846876356608. Only process the set of unique overflows.
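A minimal sketch of "only process the set of unique overflows" (dd1ba328a8), with a hypothetical overflow_t keyed by a (parent, child) link: deduplicate the reported overflows through a set so a pathological graph full of duplicate reports does not blow up the repacking loop.

    #include <set>
    #include <utility>
    #include <vector>

    using overflow_t = std::pair<unsigned, unsigned>;   // (parent obj, child obj); illustrative

    static std::vector<overflow_t> unique_overflows (const std::vector<overflow_t> &overflows)
    {
      std::set<overflow_t> seen;
      std::vector<overflow_t> out;
      for (const auto &o : overflows)
        if (seen.insert (o).second)   // keep only the first occurrence of each link
          out.push_back (o);
      return out;
    }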
Behdad Esfahbod | 59451502e9 | [cff] Optimize env error checking | 2022-11-21 15:23:16 -07:00
Behdad Esfahbod | b238578a9c | [cff] Optimize INDEX operator[] | 2022-11-21 14:36:57 -07:00
Behdad Esfahbod | d9de515a38 | [cff] Optimize byte_str_ref_t array access | 2022-11-21 14:23:07 -07:00