cahya-wirawan/rwkv-tokenizer

A fast RWKV Tokenizer written in Rust

Score: 44 / 100 (Emerging)

This tool helps developers working with RWKV language models by quickly converting plain text into numerical tokens (encoding) and reconstructing text from those token sequences (decoding). It accepts raw text in various languages and outputs the number sequences the model requires. It's designed for developers building or integrating RWKV-based applications.
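The RWKV World tokenizer encodes by greedily matching the longest vocabulary entry at each position (the Rust crate accelerates this with a trie). As a minimal sketch of that scheme, with a toy vocabulary that stands in for the real RWKV World vocabulary:

```python
# Toy vocabulary standing in for the real RWKV World vocabulary (illustrative only).
VOCAB = {
    "Hello": 1, "Hel": 2, "lo": 3, " ": 4, "world": 5,
    "w": 6, "o": 7, "r": 8, "l": 9, "d": 10, "H": 11, "e": 12,
}
ID_TO_TOKEN = {i: t for t, i in VOCAB.items()}

def encode(text: str) -> list[int]:
    """Greedy longest-match tokenization: at each position, consume the
    longest vocabulary entry that matches."""
    ids = []
    pos = 0
    while pos < len(text):
        for end in range(len(text), pos, -1):
            piece = text[pos:end]
            if piece in VOCAB:
                ids.append(VOCAB[piece])
                pos = end
                break
        else:
            raise ValueError(f"no vocabulary entry matches at position {pos}")
    return ids

def decode(ids: list[int]) -> str:
    """Decoding is a simple lookup-and-concatenate."""
    return "".join(ID_TO_TOKEN[i] for i in ids)

assert decode(encode("Hello world")) == "Hello world"
```

The production tokenizer replaces the inner linear scan with a trie walk, which is where the speed advantage of the Rust implementation comes from.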

No commits in the last 6 months. Available on PyPI.

Use this if you are a developer needing a very fast and accurate tokenizer for RWKV v5+ models in Rust, Python, or WebAssembly applications.

Not ideal if you are working with other large language models (like BERT, LLaMA, or Mistral) that require a different tokenizer, or if you don't need the speed benefits of a Rust-based solution.

Large Language Models · Natural Language Processing · Machine Learning Development · Text Preprocessing · RWKV Models
Stale (6 months) · No Dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 25 / 25
Community 9 / 25

How are scores calculated?

Stars: 54
Forks: 4
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Aug 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/cahya-wirawan/rwkv-tokenizer"

Open to everyone: 100 requests/day, no key needed. Get a free API key for 1,000 requests/day.