Before, the IntType::cmp functions providing this were truncating the
hb_codepoint_t to 16 bits before comparison. I have no idea how
this was never discovered, and I'm too lazy to try to reproduce it
with Pango (which uses non-16-bit codepoint numbers for missing glyphs).
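For illustration, a minimal sketch of the kind of bug being described; the
types and names here are made up, not the actual OT layout code:

  #include <stdint.h>

  /* Buggy: truncates the 32-bit codepoint to 16 bits before comparing,
     so e.g. 0x10001 wrongly compares equal to 0x0001. */
  static int cmp_buggy (uint16_t stored, uint32_t codepoint)
  { return (int) (uint16_t) codepoint - (int) stored; }

  /* Fixed: widen the stored 16-bit value and compare in full 32 bits. */
  static int cmp_fixed (uint16_t stored, uint32_t codepoint)
  { return (int) codepoint - (int) stored; }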
This makes a lot of code safer. We only try modifying the object in one
place, after making sure it's safe to do so. So, do a const_cast<> in
that one place...
Currently:
- Initializing skippy is very expensive,
- Our lookup accelerator (using set-digests) can be very ineffective,
As such, we often end up initializing skippy only to then fail the
coverage check. Reordering fixes that.
Later, when we fix our accelerator to have a truly small false-positive
rate (for example by using the frozen-sets), we might want to reorder
these checks back, such that we wouldn't calculate the coverage number
if skippy is going to fail.
This shows a 5% speedup with Roboto already.
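For illustration, a toy sketch of the reordering; the names are made up,
not the actual lookup code:

  #include <stdbool.h>

  /* Stand-ins: a cheap coverage test that usually misses, and the
     expensive skipper ("skippy") initialization. */
  static bool coverage_covers (unsigned int glyph) { return glyph % 16 == 0; }
  static void init_skippy (void) { }

  static bool apply (unsigned int glyph)
  {
    if (!coverage_covers (glyph))  /* cheap check first */
      return false;
    init_skippy ();                /* pay the expensive setup only on a hit */
    return true;                   /* ...then run the actual lookup rules */
  }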
I experimented with replacing use of hb_set_digest_t with this new
hb_frozen_set_t, hoping to get a huge speedup for busy lookups
(like the kern lookup in Roboto), but I only got 6% speedup in Roboto
and 4% in NotoNastaliqUrduDraft :(.
This code is C++ only. There isn't a single C++ compiler that fails to
understand the "inline" keyword, since it's required by C++98. Any
compiler older than C++98 is likely to choke on the template usage
further down, so this isn't necessary.
Moreover, the C++ standard says you cannot define such macros.
[lib.macro.names] says "Nor shall such a translation unit define macros
for names lexically identical to keywords." -- technically, it's a
promise that the Standard Library headers won't do it, but the wording
means that the entire translation unit won't do it, which implies no
source can do it.
MSVC complains about it:
fatal error C1189: #error : The C++ Standard Library forbids macroizing
keywords. Enable warning C4005 to find the forbidden macro.
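For reference, the macro being removed is presumably of this shape (a
reconstruction, not a quote from the tree):

  #ifdef _MSC_VER
  #define inline __inline
  #endif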
Author: Thiago Macieira <thiago.macieira@intel.com>
This is in no way meant to promote non-Unicode encodings. This is an entry
point that takes Unicode codepoints that happen to all be the first
256 characters and hence fit in 8bit strings. This is useful eg in Chrome
where strings that can fit in 8bit are implemented that way, and this
avoids copying into UTF-8 or UTF-16.
Perhaps we should rename this to hb_buffer_add_codepoints8(). I'm also
curious if anyone would be really interested in hb_buffer_add_codepoints16().
Please discuss!
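A minimal usage sketch, assuming the new entry point is
hb_buffer_add_latin1() with the same item-offset/item-length convention
as the other add functions:

  #include <stdint.h>
  #include <hb.h>

  int main (void)
  {
    const char *text = "caf\xe9";   /* byte 0xE9 is taken as codepoint U+00E9 */
    hb_buffer_t *buf = hb_buffer_create ();
    hb_buffer_add_latin1 (buf, (const uint8_t *) text, -1, 0, -1);
    hb_buffer_guess_segment_properties (buf);
    /* ... hb_shape (font, buf, NULL, 0); ... */
    hb_buffer_destroy (buf);
    return 0;
  }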
Roboto has glyphs (like 'F') that have 200 kerning pairs.
Add a handcoded bsearch instead of previous linear search.
This doesn't show much speedup though; apparently we spend the
bulk of the time somewhere before this point.
For discussion see:
http://lists.freedesktop.org/archives/harfbuzz/2012-April/001905.html
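For illustration, a hand-coded binary search of the kind meant here, over a
sorted pair array; the types and names are made up, not the actual kern code:

  #include <stdint.h>

  /* A sorted array of (right-glyph, kern-value) pairs for one left glyph. */
  typedef struct { uint16_t right; int16_t value; } pair_t;

  static int16_t
  lookup_pair (const pair_t *pairs, unsigned int len, uint16_t right_glyph)
  {
    unsigned int lo = 0, hi = len;
    while (lo < hi)
    {
      unsigned int mid = (lo + hi) / 2;
      if (pairs[mid].right < right_glyph)
        lo = mid + 1;
      else
        hi = mid;
    }
    return (lo < len && pairs[lo].right == right_glyph) ? pairs[lo].value : 0;
  }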
Over time we have added NO_HINTING all over the place in hb-ft. Finish it off.
Not setting ppem on hb-font disables get_contour_point() calls, which is good anyway.
See comments in the commit.
When I originally wrote hb-ft, FreeType objects did not support reference
counting. As such, hb_ft_face_create() and hb_ft_font_create() had a
"destroy" callback and client was responsible for making sure FT_Face is
kept around as long as the hb-font/face are alive.
However, since this was not clearly documented, some clients didn't
do that correctly. In particular, some clients assumed that it's safe
to destroy the FT_Face and then the hb_face_t. This, indeed, used to work,
until 45fd9424c7, which made face destroy access
font tables.
Now, I fixed that issue in 395b35903e since
the access was not needed, but the problem remains that not all clients
handle this correctly. See:
https://bugs.freedesktop.org/show_bug.cgi?id=86300
Fortunately, FT_Reference_Face() was added to FreeType in 2010, and so we
can use it now. Originally I wanted to change hb_ft_face_create() and
hb_ft_font_create() to reference the face if destroy==NULL was passed in.
That would improve pretty much all clients, with few undesired effects.
Except that FreeType itself, when compiled with HarfBuzz support, calls
hb_ft_font_create() with destroy==NULL and saves the resulting hb-font on
the ft-face (why does it not free it immediately?). Making hb-face
reference ft-face causes a reference cycle there. At least, that's my
current understanding.
At any rate, a cleaner approach, even if it means all clients will need a
change, is to introduce brand new API. Which this commit does.
Some comments added to hb-ft.h, hoping to make future clients make better
choices.
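A minimal sketch of what client code looks like, assuming the new entry
point is hb_ft_font_create_referenced(), which takes its own reference on
the FT_Face so destruction order no longer matters:

  #include <hb-ft.h>

  hb_font_t *
  make_font (FT_Face ft_face)
  {
    hb_font_t *font = hb_ft_font_create_referenced (ft_face);
    FT_Done_Face (ft_face);  /* safe: the hb-font holds its own reference now */
    return font;
  }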
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=75299
"Fixes" https://bugs.freedesktop.org/show_bug.cgi?id=86300
Based on discussion with someone else who had a similar issue, most probably
the user is releasing the FT_Face before destroying the hb_face_t / hb_font_t.
While that's a client bug, and while we can (and should) use FreeType
refcounting to help avoid that, it happens that we were accessing
the table when we didn't really have to. Avoid that.
Fail if blob start plus length overflows; or if blob length
is greater than 2GB. It takes a while for fonts to get to that
size. In the mean time, it protects against bugs like this:
http://www.icu-project.org/trac/ticket/11450
Also avoids some weird issues with 32bit vs 64bit systems
as we accept length as unsigned int. As such, a length of
-1 will cause an overflow on 32-bit machines, but will be happily
accepted on a 64-bit machine. Avoid that.
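A sketch of the check being described, not the actual hb-blob code:

  #include <stdbool.h>
  #include <stdint.h>

  static bool
  range_sane (uint32_t start, uint32_t length, uint32_t total_length)
  {
    if (length >= 1u << 31)             /* 2GB or more (includes (unsigned) -1): refuse */
      return false;
    if (start > total_length)           /* start already past the end */
      return false;
    if (length > total_length - start)  /* start + length would overflow / run past the end */
      return false;
    return true;
  }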
In Oriya, a ZWJ/ZWNJ might be added before candrabindu to encourage
or stop ligation of the candrabindu. This is clearly specified in
the Unicode section on Oriya. Allow it there. Note that Uniscribe
doesn't allow this.
Micro tests added using Noto Sans Oriya draft.
No changes in numbers. Currently at:
BENGALI: 353725 out of 354188 tests passed. 463 failed (0.130722%)
DEVANAGARI: 707307 out of 707394 tests passed. 87 failed (0.0122987%)
GUJARATI: 366349 out of 366457 tests passed. 108 failed (0.0294714%)
GURMUKHI: 60732 out of 60747 tests passed. 15 failed (0.0246926%)
KANNADA: 951190 out of 951913 tests passed. 723 failed (0.0759523%)
KHMER: 299070 out of 299124 tests passed. 54 failed (0.0180527%)
MALAYALAM: 1048147 out of 1048334 tests passed. 187 failed (0.0178378%)
ORIYA: 42320 out of 42329 tests passed. 9 failed (0.021262%)
SINHALA: 271662 out of 271847 tests passed. 185 failed (0.068053%)
TAMIL: 1091753 out of 1091754 tests passed. 1 failed (9.15957e-05%)
TELUGU: 970555 out of 970573 tests passed. 18 failed (0.00185457%)
Otherwise, we might process a lookup thousands of times, with no
benefit. This pathological case was hit by Noto Nastaliq Urdu Draft
in Firefox's code to determine whether the space glyph is involved in
any GSUB/GPOS rules. A test page is at http://behdad.org/urdu
See:
https://bugzilla.mozilla.org/show_bug.cgi?id=1090869
Currently this doesn't work though, as we detect font fallback. Apparently
matching on ct_font is not safe for this. Looks like commit
25f4fb9b56 wasn't enough after all.
After 763e5466c0, one doesn't
need to set flags for different pieces of text. The flags now
are something the client sets up once, depending on how it
actually uses the buffer. As such, don't clear them in
clear_contents().
Tests updated.
We can't really resize buffer and continue in this shaper as we are
using the scratch buffer for string_ref and log_cluster. Restructure
shaper to retry from (almost) scratch.
Apparently those functions documented as sometimes returning NULL
actually exercise that right in OS X 10.10 Yosemite. The scratch
was too small for that. I *think* I fixed it, but haven't tested
as I don't have Yosemite.
Apparently they are not (advertised as?) safe on BSD systems.
We ignore the case of static libraries.
Whitelisted on glibc, Android, and MSVC / mingw.
https://bugs.freedesktop.org/show_bug.cgi?id=82246
The reason we turned it on is because Kazuraki uses it. But that's
not reason enough. Until the OpenType spec gets its act together re
adding design-direction to lookups, this is better user experience.
Previously, we expected users to provide BOT/EOT flags when the
text *segment* was at paragraph boundaries. This meant that for
clients that provide full paragraph to HarfBuzz (eg. Pango), they
had code like this:
  hb_buffer_set_flags (hb_buffer,
                       (item_offset == 0 ? HB_BUFFER_FLAG_BOT : 0) |
                       (item_offset + item_length == paragraph_length ?
                        HB_BUFFER_FLAG_EOT : 0));
  hb_buffer_add_utf8 (hb_buffer,
                      paragraph_text, paragraph_length,
                      item_offset, item_length);
After this change such clients can simply say:
  hb_buffer_set_flags (hb_buffer,
                       HB_BUFFER_FLAG_BOT | HB_BUFFER_FLAG_EOT);
  hb_buffer_add_utf8 (hb_buffer,
                      paragraph_text, paragraph_length,
                      item_offset, item_length);
Ie, HarfBuzz itself checks whether the segment is at the beginning/end
of the paragraph. Clients that only pass item-at-a-time to HarfBuzz
continue not setting any flags whatsoever.
Another way to put it is: if there's pre-context text in the buffer,
HarfBuzz ignores the BOT flag. If there's post-context, it ignores
EOT flag.
The table can now compile independently too. If we cannot make it work
on MSVC, we can always generate the data and distribute it.
The code now compiles cleanly with:
gcc -c -xc -std=c99 -Werror -pedantic hb-ot-shape-complex-arabic-win1256.hh
g++ -c -xc++ -std=c++11 -Werror -pedantic hb-ot-shape-complex-arabic-win1256.hh
See:
a97f537cec (commitcomment-7218736)
Bug 1045139 - The Arabic text with "MS Sans Serif" font is rendered bad
https://bugzilla.mozilla.org/show_bug.cgi?id=1045139
This is only enabled on Windows platforms, and requires support from
Uniscribe to work. But for clients that do hook up to Uniscribe, this
fixes shaping of Windows-1256-encoded bitmap fonts like "MS Sans Serif".
The code and table together have just under a 1kb footprint when
enabled.
UNTESTED. I might even have broken regular Arabic fallback shaping.
Seems to be what Uniscribe does.
At this point I think it's worth checking our default...
Fixes Bug 76767 - Zeroing of advance of 2nd component of multiple
substitution with SBL Hebrew
https://bugs.freedesktop.org/show_bug.cgi?id=76767
Micro-test added.
Looks like Uniscribe responds to the 'mymr' tag by zeroing marks
GDEF_LATE instead of the generic-shaper UNICODE_LATE. Implement that.
Fixes
Bug 81775 - Incorrect Rendering with harfbuzz-ng myanmar unicode
https://bugs.freedesktop.org/show_bug.cgi?id=81775
Micro-test added based on Padauk.
Follows the order of the Arabic/Syriac specs. Also don't stop
between rlig and calt in non-Arabic scripts.
Micro-tests for Arabic and Mongolian added for the latter.