symanto-research/merge-tokenizers

Package to align tokens from different tokenizations.

Score: 22 / 100 (Experimental)

When working with text data, you often need to compare or combine information from different text analysis models, but these models might break down the same text into words or sub-words in slightly different ways. This tool helps you accurately map and align these differing token lists back to each other, even when one model splits a word into multiple pieces and another keeps it whole. It's for anyone building or using advanced natural language processing (NLP) systems, especially those integrating outputs from various language models.
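The core idea can be illustrated with a minimal sketch (this is not the merge-tokenizers API, just the underlying technique): map every token to its character span in the original text, then pair up tokens from the two tokenizations whose spans overlap. Real sub-word tokenizers add markers (e.g. `##` prefixes) and normalization that this toy version ignores.

```python
def char_spans(text, tokens):
    """Locate each token's (start, end) character span in the text.

    Assumes tokens appear verbatim and in order, which holds for this
    toy example but not for tokenizers that rewrite the surface text.
    """
    spans, cursor = [], 0
    for tok in tokens:
        start = text.index(tok, cursor)
        end = start + len(tok)
        spans.append((start, end))
        cursor = end
    return spans

def align(text, tokens_a, tokens_b):
    """Pair token indices (i, j) whose character spans overlap."""
    spans_a = char_spans(text, tokens_a)
    spans_b = char_spans(text, tokens_b)
    pairs = []
    for i, (a0, a1) in enumerate(spans_a):
        for j, (b0, b1) in enumerate(spans_b):
            if a0 < b1 and b0 < a1:  # half-open spans overlap
                pairs.append((i, j))
    return pairs

text = "unbelievable news"
word_level = ["unbelievable", "news"]          # one tokenizer keeps words whole
sub_word = ["un", "believ", "able", "news"]    # another splits into sub-words
print(align(text, word_level, sub_word))
# [(0, 0), (0, 1), (0, 2), (1, 3)]
```

Here the single word-level token 0 maps to sub-word tokens 0, 1, and 2, which is exactly the one-to-many mapping described above.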

No commits in the last 6 months.

Use this if you need to precisely connect corresponding words or word fragments (tokens) from the output of two different text processing systems or language models.

Not ideal if you only ever use a single text processing system, or if approximate, rather than precise, alignment between tokenizations is acceptable for your task.

natural-language-processing text-analysis language-model-integration data-alignment computational-linguistics
Flags: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 16
Forks:
Language: Python
License:
Last pushed: Mar 25, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/symanto-research/merge-tokenizers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.