Commit the token sampled with the mask returned from llg_compute_mask().
This is fast and can run on the critical path of sampling.
Returns 0 on success and -1 on error (use llg_get_error() to get the exact error).
When 0 is returned, the result is written to *res_p.
Compute the token mask for the next sampling step.
It typically takes up to a millisecond for a 100k-token tokenizer, so it should be called in the background.
Returns 0 on success and -1 on error (use llg_get_error() to get the exact error).
When 0 is returned, the result is written to *res_p.
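Taken together, one decode step might look like the sketch below; the result-struct names and fields (LlgMaskResult, sample_mask, LlgCommitResult) and the sampler helper are assumptions for illustration, not verbatim from this header.

    struct LlgMaskResult mres;      /* assumed result type for llg_compute_mask() */
    if (llg_compute_mask(constraint, &mres) != 0) {   /* ~1 ms: run off the critical path */
        fprintf(stderr, "%s\n", llg_get_error(constraint));
        return;
    }
    /* sample_with_mask() stands in for your sampler applying the token bitmask */
    uint32_t token = sample_with_mask(logits, mres.sample_mask);
    struct LlgCommitResult cres;    /* assumed result type for llg_commit_token() */
    if (llg_commit_token(constraint, token, &cres) != 0)  /* fast: critical path is fine */
        fprintf(stderr, "%s\n", llg_get_error(constraint));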
Set the default values for the ConstraintInit
Disables ff_tokens and backtracking; enables warnings on stderr
and all logging to the buffer (retrieve it with llg_flush_logs()).
You need to set the tokenizer field manually.
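A minimal setup sketch; the exact parameter list of the defaults call is an assumption, and the manual tokenizer assignment follows the note above:

    struct LlgConstraintInit init;
    llg_constraint_init_set_defaults(&init);   /* assumed to take just the struct */
    init.tokenizer = tokenizer;                /* must be set manually, per the docs */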
Return a string representation of the tokens, useful for debugging.
The output is NUL-terminated.
Returns the number of bytes that would be written to output if output_len was large enough.
flags is one of the LLG_DECODE_* values.
Get the logs accumulated in the constraint since the last call to this function.
The logs are NUL-terminated.
The logs are kept in the constraint until the next call to this function
or until the constraint is freed.
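A draining sketch, assuming the function returns the buffered text as a const char *:

    const char *logs = llg_flush_logs(constraint);   /* assumed: returns const char * */
    if (logs != NULL && logs[0] != '\0')
        fputs(logs, stderr);
    /* the pointer stays valid until the next flush or llg_free_constraint() */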
Get the error message from the constraint, or NULL if there is no error.
Once it returns a non-NULL value, it will keep returning that value until the constraint is freed
with llg_free_constraint() (at which point the pointer becomes invalid).
Compute the fast-forward (forced) tokens for the current state.
The result is written to output.
Returns the number of tokens written to output (which can be 0) or -1 on error.
Compute the set of allowed tokens for the current state.
The result is written to mask_dest.
mask_byte_len must be equal to llg_matcher_get_mask_byte_size().
Returns 0 on success and -1 on error.
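Putting the matcher calls together: the name llg_matcher_compute_mask_into is inferred from the mask_dest/mask_byte_len parameters above and is an assumption, while llg_matcher_get_mask_byte_size() and llg_matcher_compute_ff_tokens() are named in the docs; treat the exact signatures as assumptions.

    size_t mask_bytes = llg_matcher_get_mask_byte_size(matcher);
    uint32_t *mask = malloc(mask_bytes);       /* allocate once; the size is fixed */
    if (llg_matcher_compute_mask_into(matcher, mask, mask_bytes) != 0)
        fprintf(stderr, "%s\n", llg_matcher_get_error(matcher));

    uint32_t ff[64];                           /* capacity is the caller's choice */
    int32_t n_ff = llg_matcher_compute_ff_tokens(matcher, ff, 64);
    if (n_ff < 0)
        fprintf(stderr, "%s\n", llg_matcher_get_error(matcher));
    /* otherwise append ff[0..n_ff) to the sequence without sampling */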
Get the error message from the matcher, or NULL if there is no error.
Once it returns a non-NULL value, it will keep returning that value until the matcher is freed
with llg_free_matcher() (at which point the pointer becomes invalid).
Create a new constraint of the specified type.
Type can be one of "regex", "json_schema" (or "json"), "lark", "llguidance" (or "guidance").
Always returns a non-NULL value. Call llg_get_error() on the result to check for errors.
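Since creation never returns NULL, errors are checked on the result; the constructor name llg_new_constraint_any is an assumption here, the rest follows the docs above:

    struct LlgConstraint *c = llg_new_constraint_any(&init, "regex", "[0-9]{1,4}");
    if (llg_get_error(c) != NULL) {
        fprintf(stderr, "constraint error: %s\n", llg_get_error(c));
        llg_free_constraint(c);
        return;
    }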
Create a new matcher from the given ConstraintInit
Always returns a non-NULL value. Call llg_matcher_get_error() on the result to check for errors.
init.ff_tokens_ok and init.backtrack_ok are ignored
(backtracking is always disabled, and ff_tokens can be retrieved using llg_matcher_compute_ff_tokens()).
The format of data depends on constraint_type (see the list of types above).
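The same check for the matcher; schema_json stands in for your JSON schema string:

    struct LlgMatcher *m = llg_new_matcher(&init, "json_schema", schema_json);
    if (llg_matcher_get_error(m) != NULL) {
        fprintf(stderr, "matcher error: %s\n", llg_matcher_get_error(m));
        llg_free_matcher(m);
        return;
    }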
Commit a token to the stop-sequence controller.
Returns a valid UTF-8 string to pass back to the user (which can be empty),
and whether the sequence should then be finished.
The string is valid until the next call to this function, or until the stop-sequence controller is freed.
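A sketch of one decode step; the function name llg_stop_commit_token and the bool out-parameter are assumptions (the header may report the stop flag differently), while the lifetime rule is from the doc above:

    bool is_stopped = false;
    /* hypothetical name/signature for the commit call described above */
    const char *text = llg_stop_commit_token(stop_ctrl, token, &is_stopped);
    fputs(text, stdout);                  /* may be the empty string */
    if (is_stopped) {
        /* finish the sequence */
    }
    /* `text` is invalidated by the next call or by freeing the controller */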
Return a string representation of the tokens, useful for debugging.
The output is NUL-terminated.
Returns the number of bytes that would be written to output if output_len was large enough.
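Because the return value is the would-be length, a two-pass size probe works; this assumes the function is llg_stringify_tokens with a (tok, tokens, n_tokens, output, output_len) parameter list and that a zero-length buffer is accepted:

    size_t needed = llg_stringify_tokens(tok, tokens, n_tokens, NULL, 0);
    char *buf = malloc(needed + 1);       /* +1 in case the count excludes the NUL */
    llg_stringify_tokens(tok, tokens, n_tokens, buf, needed + 1);
    printf("%s\n", buf);
    free(buf);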
Tokenize the given bytes and return the tokens.
Always returns the number of tokens that would be written to output_tokens
if output_tokens_len was large enough.
Tokenize the given bytes and return the tokens.
Special tokens will be tokenized if they are prefixed with the 0xFF byte.
Always returns the number of tokens that would be written to output_tokens
if output_tokens_len was large enough.
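Both tokenize functions return the total count, so the same two-pass pattern applies; the name llg_tokenize_bytes and the parameter order are assumptions, as is passing a NULL/zero-length buffer on the first pass:

    const char *text = "hello world";
    size_t n = llg_tokenize_bytes(tok, (const uint8_t *)text, strlen(text), NULL, 0);
    uint32_t *toks = malloc(n * sizeof *toks);
    llg_tokenize_bytes(tok, (const uint8_t *)text, strlen(text), toks, n);
    /* toks[0..n) now holds the token ids */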
Check if the given grammar is valid.
This is about twice as fast as creating a matcher (which also validates).
See llg_new_matcher() for the grammar format.
Returns 0 on success, -1 on error, and 1 on warning.
The error or warning message is written to message, which is message_len bytes long.
It's always NUL-terminated.
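A validation sketch; the function name llg_validate_grammar and the (init, type, data) parameters are assumptions, while message/message_len and the return codes come from the doc above:

    char msg[1024];
    int32_t r = llg_validate_grammar(&init, "lark", grammar_text, msg, sizeof msg);
    if (r == -1)
        fprintf(stderr, "error: %s\n", msg);
    else if (r == 1)
        fprintf(stderr, "warning: %s\n", msg);
    /* r == 0: the grammar is valid */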
Tokenization function.
It will not write more than output_tokens_len tokens (which can be 0).
Returns the total number of tokens (which can be more than output_tokens_len).
This function has to be thread-safe!
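A toy byte-level callback satisfying the contract above (the exact typedef is an assumption modeled on the parameter names in the doc): token id = byte value, it writes at most output_tokens_len tokens, returns the total count, and touches no shared state, so it is trivially thread-safe.

    static size_t byte_tokenize(const void *user_data,
                                const uint8_t *bytes, size_t bytes_len,
                                uint32_t *output_tokens, size_t output_tokens_len) {
        (void)user_data;                       /* unused in this toy */
        size_t n = bytes_len < output_tokens_len ? bytes_len : output_tokens_len;
        for (size_t i = 0; i < n; i++)
            output_tokens[i] = bytes[i];       /* token id = byte value */
        return bytes_len;                      /* total count, may exceed what was written */
    }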