CLUEbenchmark/LightLM
High-performance small-model evaluation. Shared Tasks in NLPCC 2020, Task 1: Light Pre-Training Chinese Language Model for NLP Task
This project helps natural language processing (NLP) researchers and engineers develop and evaluate efficient Chinese language models. It provides a platform for submitting models, which are then tested on downstream tasks such as named entity recognition, reading comprehension, and keyword identification. Submitted pre-trained Chinese language models receive a performance score based on accuracy, inference time, and model size.
No commits in the last 6 months.
Use this if you are an NLP researcher or engineer focused on creating highly efficient, compact Chinese language models for real-world applications.
Not ideal if you are a business user or an individual looking for an out-of-the-box solution for Chinese text analysis without deep NLP expertise.
Stars
60
Forks
13
Language
Python
License
—
Category
—
Last pushed
Jun 01, 2020
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/CLUEbenchmark/LightLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
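The `curl` call above can also be wrapped in a small Python helper. This is a minimal sketch: the endpoint path is taken from the example, but the response schema is not documented on this page, so the helper simply returns the parsed JSON without assuming any field names, and the `repo_quality_url`/`fetch_repo_quality` function names are my own.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def repo_quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_repo_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON.

    No key is needed for up to 100 requests/day; the response
    schema is undocumented here, so we return the raw dict.
    """
    with urllib.request.urlopen(repo_quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Example (performs a network request):
# data = fetch_repo_quality("CLUEbenchmark", "LightLM")
```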
Higher-rated alternatives
codertimo/BERT-pytorch
Google AI 2018 BERT pytorch implementation
JayYip/m3tl
BERT for Multitask Learning
920232796/bert_seq2seq
PyTorch implementation of BERT for seq2seq tasks using the UniLM scheme; also supports automatic summarization, text classification, sentiment analysis, NER, and POS tagging, with support for the T5 model and GPT-2 text continuation.
sileod/tasknet
Easy modernBERT fine-tuning and multi-task learning
graykode/toeicbert
TOEIC(Test of English for International Communication) solving using pytorch-pretrained-BERT model.