microsoft/LiST
Lite Self-Training
This helps AI/ML researchers and data scientists efficiently train language models for classification tasks, even when labeled data is scarce. You input a small set of labeled text examples along with a larger pool of unlabeled text, and it outputs a strong text classifier.
No commits in the last 6 months.
Use this if you are developing AI/ML models and need to train high-performing text classifiers using very few labeled examples.
Not ideal if you already have abundant labeled data for your text classification task or are not working with language models.
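The "small labeled set plus large unlabeled pool" workflow above is the classic self-training pattern: train on the labeled data, pseudo-label the unlabeled data, keep only confident pseudo-labels, and retrain. Below is a toy sketch of that generic loop, not LiST's actual method (LiST fine-tunes pre-trained language models with prompts and adapters); the keyword-count "classifier", example texts, and confidence threshold are all illustrative assumptions.

```python
from collections import Counter

def train(texts, labels):
    # Per-class word counts stand in for a real model.
    counts = {0: Counter(), 1: Counter()}
    for t, y in zip(texts, labels):
        counts[y].update(t.split())
    return counts

def predict(model, text):
    # Score = positive-class hits minus negative-class hits;
    # |score| serves as a crude confidence measure.
    score = sum(model[1][w] - model[0][w] for w in text.split())
    return (1 if score > 0 else 0), abs(score)

labeled = ["great movie", "awful film", "loved it", "hated it"]
labels = [1, 0, 1, 0]
unlabeled = ["great acting", "awful plot", "loved the acting", "hated the plot"]

model = train(labeled, labels)

# One self-training round: pseudo-label the unlabeled pool,
# keep confident predictions, retrain on the enlarged set.
pseudo_texts, pseudo_labels = [], []
for t in unlabeled:
    y, conf = predict(model, t)
    if conf >= 1:  # confidence threshold (illustrative)
        pseudo_texts.append(t)
        pseudo_labels.append(y)

model = train(labeled + pseudo_texts, labels + pseudo_labels)
print(predict(model, "great acting")[0])  # → 1
```

Real implementations iterate this loop several times and use model probabilities (not raw counts) for the confidence filter.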
Stars: 30
Forks: 7
Language: Python
License: MIT
Category:
Last pushed: Jul 25, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/microsoft/LiST"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
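If you want to query the same endpoint programmatically, the URL follows the pattern visible in the curl example above; the `quality_url` helper and the `category` path segment ("nlp") are assumptions based only on that one URL, and the response schema is not documented here.

```python
# Hypothetical helper: build the API URL for a given repository,
# following the path pattern from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("nlp", "microsoft", "LiST"))
```

Fetch the URL with any HTTP client (e.g. `urllib.request.urlopen`) and parse the body as JSON.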
Higher-rated alternatives
airaria/TextBrewer
A PyTorch-based knowledge distillation toolkit for natural language processing
sunyilgdx/NSP-BERT
The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original...
kssteven418/LTP
[KDD'22] Learned Token Pruning for Transformers
princeton-nlp/CoFiPruning
[ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
georgian-io/Transformers-Domain-Adaptation
:no_entry: [DEPRECATED] Adapt Transformer-based language models to new text domains