necrashter/transformers-learnable-memory

Fine-tuning Image Transformers using Learnable Memory

Score: 20 / 100 (Experimental)

This project helps machine learning engineers and researchers fine-tune existing image classification models to new tasks without losing performance on previous tasks. You input a pre-trained Vision Transformer model and a new image dataset for fine-tuning. The output is a modified model that performs well on both the original task and the new task, effectively preventing "catastrophic forgetting."

No commits in the last 6 months.

Use this if you need to adapt a powerful pre-trained image transformer model to multiple specialized image classification problems sequentially, without having to retrain from scratch or manage many separate models.

Not ideal if you are starting a new image classification model from scratch or only ever need to train on a single dataset.
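The learnable-memory idea behind this project can be sketched in a few lines: the frozen self-attention layers of a Vision Transformer get a small set of extra trainable "memory" tokens that are appended to the keys and values only, so the output sequence (and therefore the rest of the frozen model and its original-task behavior) keeps its shape. Below is a minimal, illustrative NumPy sketch of that mechanism from the underlying paper ("Fine-tuning Image Transformers using Learnable Memory", Sandler et al., 2022); the function and variable names are hypothetical and not taken from this repository's code.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_memory(x, memory):
    """Single-head self-attention where learnable memory tokens are
    concatenated to the key/value sequence only. The output keeps the
    original token positions, so the frozen layers downstream see the
    same shapes as before fine-tuning. (Illustrative sketch, not the
    repository's actual implementation.)"""
    kv = np.concatenate([x, memory], axis=0)      # (n + m, d)
    scores = x @ kv.T / np.sqrt(x.shape[-1])      # (n, n + m)
    return softmax(scores) @ kv                   # (n, d)

x = np.random.randn(16, 64)      # 16 patch tokens from the frozen backbone
memory = np.random.randn(4, 64)  # 4 memory tokens: the only new parameters
out = attention_with_memory(x, memory)
print(out.shape)  # (16, 64)
```

During fine-tuning on a new task, only `memory` (and typically a new classifier head) is updated; because the memory tokens never overwrite the original patch-token pathway, the pre-trained weights and earlier-task behavior are left intact.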

deep-learning computer-vision transfer-learning image-classification model-adaptation
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 8
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Jun 20, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/necrashter/transformers-learnable-memory"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.