SALT-NLP/IDBR
Code for the paper "Continual Learning for Text Classification with Information Disentanglement Based Regularization"
This project helps machine learning engineers and researchers train text classification models efficiently as new text datasets or categories emerge over time. It takes sequences of labeled text data for different classification tasks (e.g., news categories, product reviews, sentiment) and outputs trained models that can classify new text while minimizing 'forgetting' of previously learned categories. This is particularly useful for teams managing evolving text-based AI systems.
No commits in the last 6 months.
Use this if you need to continuously update a text classification system with new types of text or categories without having to retrain from scratch on all historical data, while maintaining performance on older tasks.
Not ideal if you are building a text classifier for a single, static set of categories and do not anticipate new classification tasks over time.
Stars
44
Forks
2
Language
Python
License
MIT
Category
Last pushed
Feb 09, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/SALT-NLP/IDBR"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
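As a sketch, the JSON returned by the endpoint above could be consumed in Python roughly as follows. The field names (`repo`, `stars`, `forks`, `language`) are assumptions inferred from the stats shown on this page, not the documented response schema, and the sample payload here is hypothetical rather than a live response:

```python
import json

# Hypothetical sample payload; check the actual API response for real field names.
sample = '{"repo": "SALT-NLP/IDBR", "stars": 44, "forks": 2, "language": "Python"}'

data = json.loads(sample)
print(f"{data['repo']}: {data['stars']} stars, {data['forks']} forks ({data['language']})")
```

In practice you would replace the sample string with the body returned by the `curl` call above (or an HTTP client of your choice) and inspect `data.keys()` to discover the real schema.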
Higher-rated alternatives
debjitpaul/refiner
The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate...
THUDM/P-tuning
A novel method to tune language models. Codes and datasets for the paper "GPT understands, too".
ZixuanKe/PyContinual
PyContinual (An Easy and Extendible Framework for Continual Learning)
arazd/ProgressivePrompts
Progressive Prompts: Continual Learning for Language Models
Nithin-Holla/MetaLifelongLanguage
Repository containing code for the paper "Meta-Learning with Sparse Experience Replay for...