Fix patterns starting with `^` in `tokenizer` (#1645)

Previously the "dirty" version of the pattern (still carrying its leading `^`) was used,
so matching could run against a pattern with multiple `^`, causing valid matches to fail.
Guldoman 2023-11-29 17:11:46 +01:00 committed by George Sokianos
parent ee02d0e0b6
commit 234dd40e49
1 changed file with 2 additions and 0 deletions

@@ -210,9 +210,11 @@ function tokenizer.tokenize(incoming_syntax, text, state, resume)
         -- Remove '^' from the beginning of the pattern
         if type(target) == "table" then
           target[p_idx] = code:usub(2)
+          code = target[p_idx]
         else
           p.pattern = p.pattern and code:usub(2)
           p.regex = p.regex and code:usub(2)
+          code = p.pattern or p.regex
         end
       end
     end
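
For context, a minimal standalone sketch of why the stale `^` broke matching (not part of the commit; it uses plain string.find instead of the tokenizer's ufind/usub helpers and simplifies the re-anchoring done in the real code): in Lua patterns, `^` anchors only as the very first character of the pattern, so a second `^` is treated as a literal character.

-- Minimal sketch (assumption: plain string.find, simplified re-anchoring).
local line = "# a comment line"

-- Clean pattern, anchored once at the start of the line: matches.
print(line:find("^#"))   --> 1  1

-- "Dirty" pattern: the leading '^' was never stripped from the local copy,
-- so re-anchoring yields '^^#', which requires a literal '^' and fails.
print(line:find("^^#"))  --> nil

The fix updates the local `code` variable right after stripping the `^`, so later re-anchoring always starts from the clean pattern.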