SALT-NLP/IDBR

Code for the paper "Continual Learning for Text Classification with Information Disentanglement Based Regularization"

Quality score: 29 / 100 (Experimental)

This project helps machine learning engineers and researchers train text classification models efficiently when new text datasets or categories emerge over time. It takes in sequences of labeled text data for different classification tasks (e.g., news categories, product reviews, sentiment) and outputs trained models that classify new text while minimizing "forgetting" of previously learned categories. It is particularly useful for teams maintaining evolving text-based AI systems.

No commits in the last 6 months.

Use this if you need to continuously update a text classification system with new types of text or categories without having to retrain from scratch on all historical data, while maintaining performance on older tasks.

Not ideal if you are building a text classifier for a single, static set of categories and do not anticipate new classification tasks over time.

Tags: continual learning, text classification, natural language processing, machine learning research, AI model deployment
Badges: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 44
Forks: 2
Language: Python
License: MIT
Last pushed: Feb 09, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/SALT-NLP/IDBR"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
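The same endpoint can be called from Python. The URL pattern comes from the curl command above; the `X-API-Key` header name and the shape of the JSON response are assumptions, not documented here, so treat this as a sketch.

```python
import json
import urllib.request
from typing import Optional

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Fetch the quality report as a dict.

    NOTE: the `X-API-Key` header name is a guess; check the API docs
    for how to pass the free key that raises the rate limit.
    """
    req = urllib.request.Request(quality_url(ecosystem, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Reproduces the URL from the curl command:
url = quality_url("nlp", "SALT-NLP", "IDBR")
print(url)
```

At 100 unauthenticated requests per day, a small script like this is enough for spot checks; batch consumers would want the free key.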