mxcoras/jieba-next
Use Rust to speed up jieba: an efficient, modern Chinese word segmentation library
This tool breaks Chinese text into individual words, a process called Chinese word segmentation: you feed it raw Chinese sentences or documents, and it returns the segmented word sequence. It is useful for anyone working with Chinese text data, such as natural language processing engineers or data scientists.
Available on PyPI.
Use this if you need a high-performance, modern, and reliable solution for Chinese word segmentation in your data analysis or NLP applications.
Not ideal if your primary focus is on languages other than Chinese, as this tool is specifically designed for Chinese text segmentation.
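To illustrate what segmentation means in practice, here is a toy forward maximum-matching segmenter in pure Python. This is only a sketch of the general task; it is not jieba-next's actual algorithm (jieba builds a prefix dictionary and uses an HMM for unknown words), and the vocabulary below is made up for the example.

```python
def fmm_segment(text, vocab, max_len=4):
    """Toy forward maximum matching: at each position, greedily take the
    longest dictionary word (falling back to a single character)."""
    words = []
    i = 0
    while i < len(text):
        # Try candidate lengths from longest to shortest.
        for length in range(min(max_len, len(text) - i), 0, -1):
            cand = text[i:i + length]
            if length == 1 or cand in vocab:
                words.append(cand)
                i += length
                break
    return words

# Hypothetical mini-vocabulary for demonstration only.
vocab = {"中文", "分词", "处理"}
print(fmm_segment("中文分词处理", vocab))  # ['中文', '分词', '处理']
```

A real segmenter like jieba-next ships with a large dictionary and statistical models, so it handles ambiguity and out-of-vocabulary words far better than this greedy baseline.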
Stars
10
Forks
—
Language
Python
License
MIT
Category
Last pushed
Jan 29, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/mxcoras/jieba-next"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
PyThaiNLP/pythainlp
Thai natural language processing in Python
hankcs/HanLP
Natural Language Processing for the next decade. Tokenization, Part-of-Speech Tagging, Named...
dongrixinyu/JioNLP
A Chinese NLP preprocessing and parsing toolkit: accurate, efficient, and easy to use. www.jionlp.com
jacksonllee/pycantonese
Cantonese Linguistics and NLP
hankcs/pyhanlp
Chinese word segmentation