GT-RIPL/Continual-Learning-Benchmark
Evaluate three types of task shifting with popular continual learning algorithms.
This project evaluates how well different continual learning techniques prevent a model that learns tasks sequentially from forgetting what it learned previously (catastrophic forgetting). You supply continual learning algorithms and datasets with shifting tasks, and it outputs performance metrics showing how robust each algorithm is to forgetting. It is aimed at AI researchers and machine learning engineers developing or testing AI systems that need to adapt over time.
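To make the kind of metric such benchmarks report concrete, here is a minimal sketch of an average-forgetting measure computed from a per-task accuracy matrix. The function name and the exact metric definition are illustrative assumptions for this page, not this repository's API.

```python
def average_forgetting(acc):
    """Average forgetting over a sequence of T tasks.

    acc[i][j] is the accuracy on task j measured after training on
    tasks 0..i (so row i has i+1 entries). For each earlier task, the
    drop from its best past accuracy to its final accuracy is averaged.
    """
    T = len(acc)
    drops = []
    for j in range(T - 1):
        # Best accuracy task j ever reached before the final task.
        best = max(acc[i][j] for i in range(j, T - 1))
        # How much of that was lost by the end of the sequence.
        drops.append(best - acc[T - 1][j])
    return sum(drops) / len(drops)
```

For example, an algorithm that reaches 0.90 on task 0 but ends the three-task sequence at 0.60 on it shows substantial forgetting; a robust algorithm keeps the drops near zero.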
523 stars. No commits in the last 6 months.
Use this if you are developing AI models that need to continuously learn new information without forgetting old knowledge and you want to benchmark different algorithms for this problem.
Not ideal if you are looking for a pre-trained model or a simple API to integrate into an existing application, rather than a research and evaluation framework.
Stars
523
Forks
89
Language
Python
License
MIT
Category
ml-frameworks
Last pushed
Apr 26, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/GT-RIPL/Continual-Learning-Benchmark"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
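For programmatic access, the curl call above can be wrapped in a few lines of Python. The response schema is not documented on this page, so this sketch only builds the endpoint URL and parses whatever JSON the API returns; `quality_url` and `fetch_quality` are illustrative names, not part of the API.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

Usage: `fetch_quality("ml-frameworks", "GT-RIPL", "Continual-Learning-Benchmark")` performs the same request as the curl example.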
Higher-rated alternatives
aimagelab/mammoth
An Extendible (General) Continual Learning Framework based on Pytorch - official codebase of...
LAMDA-CL/PyCIL
PyCIL: A Python Toolbox for Class-Incremental Learning
GMvandeVen/continual-learning
PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR,...
LAMDA-CL/LAMDA-PILOT
🎉 PILOT: A Pre-trained Model-Based Continual Learning Toolbox
mmasana/FACIL
Framework for Analysis of Class-Incremental Learning with 12 state-of-the-art methods and 3 baselines.