yenniejun/tokenizers-languages

Comparing LLM tokenizers in multiple languages

Score: 20 / 100 (Experimental)

This tool helps researchers, linguists, and AI practitioners understand how large language models (LLMs) break text into tokens across different languages. You enter text in several languages, and it shows how different LLM tokenizers process each one, highlighting how the token count for equivalent text varies by language. This is important for anyone working with multilingual LLMs who wants language processing to be fair and efficient.
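As a rough illustration of the kind of comparison involved (a minimal sketch, not the repo's own code), the snippet below counts tokens for the same sentence in a few languages using Hugging Face tokenizers; the model names and sample sentences are arbitrary choices.

from transformers import AutoTokenizer

models = ["gpt2", "xlm-roberta-base"]
samples = {
    "English": "Hello, how are you today?",
    "Korean": "안녕하세요, 오늘 어떻게 지내세요?",
    "Hindi": "नमस्ते, आज आप कैसे हैं?",
}

for name in models:
    tok = AutoTokenizer.from_pretrained(name)
    print(name)
    for lang, text in samples.items():
        # More tokens for the same content means higher cost and a shorter
        # effective context window for that language with this tokenizer.
        print(f"  {lang}: {len(tok.tokenize(text))} tokens")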

No commits in the last 6 months.

Use this if you are developing or evaluating large language models and need to understand how text is tokenized across diverse languages, especially non-English ones.

Not ideal if you are looking for a tool to translate text or analyze the grammatical structure of sentences, as its focus is specifically on tokenization efficiency.

natural-language-processing linguistics AI-model-evaluation multilingual-AI language-technology
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 8
Forks: (not listed)
Language: Python
License: (not listed)
Last pushed: May 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/yenniejun/tokenizers-languages"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
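If you prefer to call the endpoint from code rather than curl, a minimal Python sketch could look like the following (it assumes the endpoint returns JSON; the exact response schema is not documented here):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/yenniejun/tokenizers-languages"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # inspect the returned fields; the schema may differ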