Fix patterns starting with `^` in `tokenizer` (#1645)
Previously the "dirty" version of the pattern (still carrying its leading `^`) was used, which could result in matching with multiple `^` anchors and caused valid matches to fail.
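The fix boils down to re-reading the pattern after its leading `^` has been stripped, instead of reusing the dirty original. A minimal sketch of that idea in Python (the function name `strip_anchor` is illustrative, not from the source):

```python
def strip_anchor(pattern):
    # Drop a leading '^' so the pattern can match mid-string.
    # Mirrors the tokenizer fix: the caller must continue with
    # the stripped pattern, not the original "dirty" one.
    return pattern[1:] if pattern.startswith("^") else pattern

dirty = "^word"
clean = strip_anchor(dirty)
# Stripping is idempotent: applying it again cannot stack
# anchors or fail a pattern that was already clean.
assert strip_anchor(clean) == clean
```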
This commit is contained in:
parent 1c10bb894d
commit e95094f0ca
@@ -210,9 +210,11 @@ function tokenizer.tokenize(incoming_syntax, text, state, resume)
       -- Remove '^' from the beginning of the pattern
       if type(target) == "table" then
         target[p_idx] = code:usub(2)
+        code = target[p_idx]
       else
         p.pattern = p.pattern and code:usub(2)
         p.regex = p.regex and code:usub(2)
+        code = p.pattern or p.regex
       end
     end
   end