GT-RIPL/Continual-Learning-Benchmark

Evaluate three types of task shifting with popular continual learning algorithms.

Score: 48 / 100 (Emerging)

When building AI models that learn new tasks sequentially, this project helps evaluate how well different 'continual learning' techniques prevent the model from forgetting what it learned previously. You feed in various continual learning algorithms and datasets with shifting tasks, and it outputs performance metrics showing how robust each algorithm is to forgetting. This is for AI researchers and machine learning engineers developing or testing new AI systems that need to adapt over time.
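To make the evaluation concrete, here is a minimal sketch of how continual-learning benchmarks of this kind typically score forgetting. This is illustrative only and not this repository's actual API: the accuracy matrix, function names, and numbers are assumptions. `acc[i][j]` holds accuracy on task `j` after training on task `i`.

```python
# Illustrative sketch (hypothetical, not this repo's API): scoring a
# continual learner from its task-accuracy matrix.
# acc[i][j] = accuracy on task j after training sequentially up to task i.

def average_accuracy(acc):
    """Mean accuracy over all tasks after the final training stage."""
    final = acc[-1]
    return sum(final) / len(final)

def average_forgetting(acc):
    """For each non-final task: best accuracy ever reached on it,
    minus its accuracy after the last stage (higher = more forgetting)."""
    n = len(acc)
    drops = []
    for j in range(n - 1):
        best = max(acc[i][j] for i in range(n))
        drops.append(best - acc[-1][j])
    return sum(drops) / len(drops)

# Hypothetical results for three sequential tasks.
acc = [
    [0.95, 0.00, 0.00],
    [0.80, 0.93, 0.00],
    [0.70, 0.85, 0.92],
]
print(round(average_accuracy(acc), 3))    # 0.823
print(round(average_forgetting(acc), 3))  # 0.165
```

A robust algorithm keeps average forgetting low while average accuracy stays high; the benchmark reports metrics in this spirit across the different task-shift types.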

523 stars. No commits in the last 6 months.

Use this if you are developing AI models that need to continuously learn new information without forgetting old knowledge and you want to benchmark different algorithms for this problem.

Not ideal if you are looking for a pre-trained model or a simple API to integrate into an existing application, rather than a research and evaluation framework.

Tags: continual learning research, AI model evaluation, catastrophic forgetting, deep learning algorithms, sequential task learning
Badges: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 22 / 25


Stars: 523
Forks: 89
Language: Python
License: MIT
Last pushed: Apr 26, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/GT-RIPL/Continual-Learning-Benchmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
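For scripted access, the endpoint path from the curl command above (`/api/v1/quality/<category>/<owner>/<repo>`) can be assembled for any repository. The helper below is a hypothetical convenience, not part of the API's documentation; only the URL shape is taken from the example, and actually fetching it is left to your HTTP client of choice within the stated rate limits.

```python
# Hypothetical helper: build the quality-API URL for a given repository.
# Only the path shape is taken from the curl example above.

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the quality endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "GT-RIPL", "Continual-Learning-Benchmark")
print(url)
```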