ma2za/torch-adapters
Small Library of PyTorch Adaptation modules
This library helps machine learning practitioners fine-tune large pre-trained models more efficiently. It takes an existing PyTorch model and applies adaptation techniques such as LoRA or prompt tuning, producing a model specialized for a new task without the extensive re-training or storage that full fine-tuning requires. It is aimed at data scientists, ML engineers, and researchers working with large language or vision models.
No commits in the last 6 months. Available on PyPI.
Use this if you need to adapt large pre-trained PyTorch models to new tasks or datasets without the computational cost and storage requirements of full fine-tuning.
Not ideal if you are building a model from scratch or performing full-scale fine-tuning where parameter efficiency is not a primary concern.
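To make the parameter-efficiency trade-off concrete, here is a minimal sketch of the LoRA technique the library applies. This is a hand-rolled illustration of the idea (a frozen linear layer plus a trainable low-rank update), not torch-adapters' actual API; the class name and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: frozen base layer + trainable low-rank delta.

    Not the torch-adapters API; a generic sketch of the technique.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen

        # Low-rank factors: delta_W = B @ A has rank <= r.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the wrapped layer initially equals the base layer.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(64, 32), r=4)
x = torch.randn(2, 64)
# Only the rank-4 factors (4*64 + 32*4 values) are trainable, versus
# 64*32 + 32 parameters for full fine-tuning of the base layer.
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```

Because only the small `A` and `B` matrices train, a task-specific checkpoint is a tiny fraction of the full model's size, which is the storage saving the description refers to.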
Stars: 9
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Jul 28, 2023
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ma2za/torch-adapters"
Open to everyone: 100 requests/day with no key needed; a free API key raises the limit to 1,000/day.
Higher-rated alternatives
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
gaussalgo/adaptor
ACL 2022: Adaptor: a library to easily adapt a language model to your own task, domain, or...
ylsung/VL_adapter
PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language...
intersun/LightningDOT
source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT
kyegomez/M2PT
Implementation of M2PT in PyTorch from the paper: "Multimodal Pathway: Improve Transformers with...