grypesc/SEED

ICLR 2024 paper on continual learning

Quality score: 39 / 100 (Emerging)

This project helps machine learning researchers evaluate methods for continual learning, where a model learns new information over time without forgetting previously learned concepts. It takes standard image datasets, such as CIFAR-100 or ImageNet, and outputs performance metrics showing how well models adapt to new tasks while retaining old knowledge. Researchers developing robust and adaptive AI systems would find this useful.

No commits in the last 6 months.

Use this if you are a machine learning researcher evaluating continual learning algorithms, especially in image classification tasks.

Not ideal if you are looking for a pre-trained model to deploy in an application or if your primary interest is in domains other than image classification.

Tags: Machine Learning Research, Continual Learning, Image Classification, AI Model Evaluation, Deep Learning, Benchmarking
Badges: Stale (6m), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 36
Forks: 7
Language: Python
License: MIT
Last pushed: Apr 21, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/grypesc/SEED"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
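A minimal Python sketch of calling the same endpoint programmatically. The URL path is taken from the curl example above; the JSON field names returned by the API and any API-key mechanism are not documented here, so the fetch helper is an assumption about the response being JSON.

```python
import json
import urllib.parse
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository, percent-encoding each segment."""
    parts = [urllib.parse.quote(p, safe="") for p in (collection, owner, repo)]
    return f"{API_BASE}/{'/'.join(parts)}"

def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """Fetch the score card as JSON (assumed format; no key needed at low volume)."""
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)

# Reproduces the URL from the curl example:
print(quality_url("ml-frameworks", "grypesc", "SEED"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/grypesc/SEED
```

`fetch_quality` is shown but not called here, since it performs a live network request against the rate-limited endpoint.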