old-wang-95/easy-bert
easy-bert is a Chinese NLP toolkit that offers many BERT variants with ready-to-use invocation and fine-tuning options, so you can get started quickly; its clean design and code comments also make it well suited for learning.
This tool helps people working with Chinese text data to quickly train and use various BERT models for tasks like classifying text into categories (e.g., positive/negative reviews) or labeling parts of a sentence (e.g., identifying names or locations). You input raw Chinese text and corresponding labels, and it outputs a trained model ready to make predictions on new text, or the predicted labels for your text. It's designed for data scientists, NLP engineers, or researchers who need to apply state-of-the-art language models to Chinese language problems.
No commits in the last 6 months.
Use this if you need to rapidly experiment with different BERT models and fine-tuning configurations for Chinese text classification or sequence labeling tasks, or if you want to apply knowledge distillation to speed up inference.
Not ideal if your primary focus is on non-Chinese languages or if you need to implement highly custom, low-level modifications to the BERT architecture itself beyond parameter tuning.
Stars
83
Forks
14
Language
Python
License
MIT
Category
NLP
Last pushed
Nov 08, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/old-wang-95/easy-bert"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
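The same data can be fetched from Python. Below is a minimal sketch using only the standard library; the `quality_url` helper and the `/api/v1/quality/<category>/<owner>/<repo>` path shape are assumptions inferred from the single curl example above, not documented API behavior.

```python
import json
import urllib.request

# Base endpoint as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Hypothetical helper: assumes the path segments are
    # category, then repo owner, then repo name.
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("nlp", "old-wang-95", "easy-bert")

# No key is needed within the free tier (100 requests/day).
# Uncomment to perform the actual request:
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

The network call is left commented out so the snippet can be read as documentation of the URL shape without depending on the service being reachable.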
Higher-rated alternatives
KoichiYasuoka/esupar
Tokenizer POS-Tagger and Dependency-parser with BERT/RoBERTa/DeBERTa/GPT models for Japanese and...
hellohaptik/multi-task-NLP
multi_task_NLP is a utility toolkit enabling NLP developers to easily train and infer a single...
taishi-i/nagisa_bert
A BERT model for nagisa
ant-louis/netbert
📶 NetBERT: a domain-specific BERT model for computer networking.
AndyTheFactory/RO-Diacritics
Python package for Romanian diacritics restoration