srvCodes/continual_learning_with_vit

Code for our CVPR 2022 workshop paper "Towards Exemplar-Free Continual Learning in Vision Transformers"

Quality score: 35 / 100 (Emerging)

This project helps machine learning researchers improve how Vision Transformers (ViTs) learn new image classification tasks over time without forgetting previously learned information. It implements methods for exemplar-free continual learning, meaning no old training examples need to be stored. Given a sequence of image classification tasks, it produces a ViT model that learns each new task while retaining knowledge from previous ones.
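For orientation, here is a minimal sketch of what an exemplar-free continual learning loop looks like in PyTorch: a frozen copy of the previous-task model regularizes the new one through knowledge distillation instead of replayed examples. This is a generic illustration, not the paper's method; the timm backbone, model name, class count, and the train_task/continual_train helpers are all assumptions.

import copy

import timm  # assumption: ViT backbone taken from timm
import torch
import torch.nn.functional as F

def train_task(model, prev_model, loader, optimizer, lambda_kd=1.0, device="cpu"):
    # One task's training pass; the frozen previous-task model acts as teacher.
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        logits = model(images)
        loss = F.cross_entropy(logits, labels)
        if prev_model is not None:
            # Regularize toward the old model's outputs rather than replaying
            # stored examples -- this is what makes the setup exemplar-free.
            with torch.no_grad():
                old_logits = prev_model(images)
            loss = loss + lambda_kd * F.kl_div(
                F.log_softmax(logits, dim=1),
                F.softmax(old_logits, dim=1),
                reduction="batchmean",
            )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def continual_train(task_loaders, num_classes=100, device="cpu"):
    # task_loaders: one DataLoader per task, presented sequentially.
    model = timm.create_model("vit_small_patch16_224", num_classes=num_classes).to(device)
    prev_model = None
    for loader in task_loaders:
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
        train_task(model, prev_model, loader, optimizer, device=device)
        # Freeze a copy of the just-trained model as the next task's teacher.
        prev_model = copy.deepcopy(model).eval()
        for p in prev_model.parameters():
            p.requires_grad_(False)
    return model

Keeping only one frozen copy of the previous model bounds memory use, which is the central constraint that the exemplar-free setting imposes.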

No commits in the last 6 months.

Use this if you are a machine learning researcher or practitioner working with Vision Transformers and need to implement or evaluate advanced continual learning strategies for image classification.

Not ideal if you are looking for an out-of-the-box solution for deploying a continually learning system without deep expertise in machine learning research.

Tags: continual-learning, image-classification, vision-transformers, machine-learning-research, deep-learning
Flags: stale (no commits in 6 months), no package published, no known dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 13 / 25

Stars: 24
Forks: 4
Language: Python
License: MIT
Last pushed: Jul 10, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/srvCodes/continual_learning_with_vit"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
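The same endpoint can also be queried from a script; a minimal Python sketch using requests (assuming the endpoint returns JSON, which the listing does not confirm):

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/srvCodes/continual_learning_with_vit")

# Keyless access is rate-limited to 100 requests/day (see the note above).
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumption: the response body is JSON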