necrashter/transformers-learnable-memory
Fine-tuning Image Transformers using Learnable Memory
This project helps machine learning engineers and researchers fine-tune existing image classification models on new tasks without losing performance on previous ones. You provide a pre-trained Vision Transformer (ViT) and a new image dataset; the output is a modified model that performs well on both the original task and the new one, preventing "catastrophic forgetting."
No commits in the last 6 months.
Use this if you need to adapt a powerful pre-trained image transformer model to multiple specialized image classification problems sequentially, without having to retrain from scratch or manage many separate models.
Not ideal if you are training a new image classification model from scratch or only ever need to train on a single dataset.
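The "learnable memory" idea from the underlying paper can be sketched as follows: a small set of trainable memory tokens is concatenated to each encoder layer's input so that the image tokens may attend to them, while the memory tokens' own outputs are discarded and the backbone weights stay frozen. This is a minimal NumPy sketch of that attention step only; the function and variable names are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_memory(tokens, memory, Wq, Wk, Wv):
    """Self-attention where queries come from the input tokens only,
    while keys/values also cover the learnable memory tokens.
    Memory outputs are discarded, so the output sequence length
    matches the input and the frozen backbone's shape is preserved."""
    kv_in = np.concatenate([tokens, memory], axis=0)  # (n + m, d)
    q = tokens @ Wq                                   # (n, d): no queries for memory
    k = kv_in @ Wk                                    # (n + m, d)
    v = kv_in @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])           # (n, n + m)
    return softmax(scores) @ v                        # (n, d)

# Illustrative shapes: 4 image tokens, 2 memory tokens, width 8.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 8))
memory = rng.standard_normal((2, 8))   # the only trainable part besides the new head
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = attention_with_memory(tokens, memory, Wq, Wk, Wv)
```

During fine-tuning, only `memory` (one set per layer) and a new classification head would receive gradients, which is what keeps the original task's behavior intact.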
Stars
8
Forks
—
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Jun 20, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/necrashter/transformers-learnable-memory"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
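The same endpoint can be called from Python using only the standard library. This is a minimal sketch: the URL path is taken from the curl example above, but the response schema is not documented here, so the JSON-returning helper is an assumption.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(topic: str, owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL (path from the curl example).
    parts = (urllib.parse.quote(p, safe="") for p in (topic, owner, repo))
    return "/".join([BASE, *parts])

def fetch_quality(topic: str, owner: str, repo: str) -> dict:
    # Hypothetical helper: assumes the endpoint returns a JSON object.
    with urllib.request.urlopen(quality_url(topic, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("transformers", "necrashter", "transformers-learnable-memory"))
```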
Higher-rated alternatives
ThilinaRajapakse/simpletransformers
Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling,...
jsksxs360/How-to-use-Transformers
A quick-start tutorial for the Transformers library (in Chinese)
google/deepconsensus
DeepConsensus uses gap-aware sequence transformers to correct errors in Pacific Biosciences...
Denis2054/Transformers-for-NLP-2nd-Edition
Transformer models from BERT to GPT-4, environments from Hugging Face to OpenAI. Fine-tuning,...
abhimishra91/transformers-tutorials
GitHub repo with tutorials on fine-tuning transformers for different NLP tasks