sanderland/script_tok
Code for the paper "BPE stays on SCRIPT"
This project provides tools for breaking text into smaller, meaningful pieces, a process known as tokenization. It converts raw text in many languages into a sequence of tokens using either the SCRIPT encoding from the paper or standard UTF-8 bytes. It is aimed at researchers and engineers working on natural language processing (NLP) models, especially multilingual ones.
Use this if you are developing or training large language models and need precise control over how text is tokenized, particularly for multilingual datasets, to improve model performance and efficiency.
Not ideal if you are looking for a simple, off-the-shelf tokenizer for a single language or if your NLP task does not involve training custom language models.
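To make the tokenization concept above concrete, here is a minimal sketch of the byte-pair encoding (BPE) merge loop that this family of tokenizers builds on. This is an illustration of the general algorithm only, not the project's actual implementation; the function names and the two-merge demo are invented for this example.

```python
# Minimal BPE sketch: repeatedly merge the most frequent adjacent token pair.
# Illustrative only -- not the script_tok implementation.
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one (or None)."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("banana")   # start from single characters
for _ in range(2):        # apply two merge steps
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
# tokens is now a shorter sequence of merged subword units
```

Real tokenizers learn a fixed merge table from a training corpus and then replay those merges at encoding time; the loop above shows only the core merge operation.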
Stars
16
Forks
3
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 04, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/sanderland/script_tok"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
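For scripted access, the curl endpoint above can be wrapped in a small Python helper. Only the URL pattern is taken from the curl example; the function names are hypothetical and the JSON response schema is an assumption.

```python
# Hypothetical helper around the quality API shown above.
# The URL pattern mirrors the curl example; the response schema is assumed JSON.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL used by the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the payload (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# e.g. fetch_quality("nlp", "sanderland", "script_tok")
```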
Higher-rated alternatives
georg-jung/FastBertTokenizer
Fast and memory-efficient library for WordPiece tokenization as it is used by BERT.
ml-rust/splintr
A high-performance tokenizer (BPE + SentencePiece) built with Rust with Python bindings, focused...
ash-01xor/bpe.c
Simple byte-pair encoding mechanism for tokenization, written purely in C
U4RASD/r-bpe
R-BPE: Improving BPE-Tokenizers with Token Reuse
jmaczan/bpe-tokenizer
Byte-Pair Encoding tokenizer for training large language models on huge datasets