bminixhofer/tokenkit

A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers.

Quality score: 35 / 100 (Emerging)

This toolkit helps you adapt existing large language models (LLMs) to new tokenizers without retraining them from scratch. You can take an LLM trained with one tokenizer and transfer its knowledge to another tokenizer, which is useful for tasks like adapting an English-trained model to a new language, combining models that use different tokenizers, or experimenting with new tokenization schemes. It's designed for researchers or engineers working on customizing or extending the capabilities of LLMs.
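To make the idea concrete: a minimal sketch of tokenizer transfer, illustrating only the simplest baseline (initializing each new token's embedding as the mean of the old tokenizer's sub-piece embeddings). This is not tokenkit's actual algorithm, and the toy vocabularies below are hypothetical; the library implements considerably more advanced methods.

```python
import numpy as np

# Hypothetical old tokenizer vocabulary and its trained embedding matrix.
old_vocab = {"un": 0, "happy": 1, "ness": 2, "h": 3, "a": 4}
old_emb = np.random.default_rng(0).normal(size=(len(old_vocab), 4))

def segment_with_old(token: str) -> list[int]:
    """Greedy longest-match segmentation of a new token using the old vocab."""
    ids, i = [], 0
    while i < len(token):
        for j in range(len(token), i, -1):
            if token[i:j] in old_vocab:
                ids.append(old_vocab[token[i:j]])
                i = j
                break
        else:
            i += 1  # skip characters the old vocab cannot cover
    return ids

def init_new_embedding(token: str) -> np.ndarray:
    """Average the old embeddings of the pieces that make up the new token."""
    ids = segment_with_old(token)
    if not ids:  # no overlap at all: fall back to the global mean
        return old_emb.mean(axis=0)
    return old_emb[ids].mean(axis=0)

# Initialize an embedding matrix for a hypothetical new vocabulary.
new_vocab = ["unhappy", "happy", "ness"]
new_emb = np.stack([init_new_embedding(t) for t in new_vocab])
```

Heuristics like this give the new tokenizer a usable starting point; transfer methods such as those in tokenkit then go further to recover the original model's quality.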

No commits in the last 6 months.

Use this if you need to modify the tokenizer of an existing LLM or transfer its capabilities to a different tokenization system, such as adapting it for a new language or merging models.

Not ideal if you are looking for a tool to train an LLM from scratch or if you don't need to change the underlying tokenizer of your models.

large-language-models natural-language-processing model-adaptation multilingual-nlp model-distillation
Stale (6m) · No package · No dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 64
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Jul 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bminixhofer/tokenkit"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
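The same endpoint can be queried from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above (`/api/v1/quality/{ecosystem}/{owner}/{repo}`); the response schema is not documented here, so inspect the returned JSON before relying on specific fields. The `quality_url` helper is illustrative, not part of any published client.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "bminixhofer", "tokenkit")

# Uncomment to fetch (no API key needed for up to 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```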