calpt/awesome-adapter-resources
Collection of tools and papers related to adapters / parameter-efficient transfer learning / fine-tuning
This resource helps machine learning practitioners efficiently adapt large pre-trained neural networks, such as Transformers, to new tasks. It provides a curated collection of tools and research papers on "adapter" methods, which let you customize these large models without the heavy computational cost of full fine-tuning. It is aimed at AI engineers, data scientists, and ML researchers who work with large language, vision, or audio models.
202 stars. No commits in the last 6 months.
Use this if you need to fine-tune large pre-trained models for various tasks with significantly reduced computational resources, storage, and deployment complexity compared to full model fine-tuning.
Not ideal if you are working with small models that don't benefit from parameter-efficient techniques or if you require full fine-tuning for maximum performance and have ample computational resources.
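To make the resource savings concrete, here is a back-of-envelope parameter count for a classic bottleneck adapter (Houlsby-style) inserted into a BERT-base-sized model. All sizes here are illustrative assumptions, not figures from this repository.

```python
# Back-of-envelope comparison of adapter vs. full fine-tuning.
# Assumptions (for illustration only): a BERT-base-sized model with
# ~110M parameters, 12 layers, hidden size d=768, and a bottleneck
# adapter with reduction dimension r=64, two adapters per layer.

def adapter_params_per_layer(d: int, r: int) -> int:
    # Down-projection (d*r weights + r biases) followed by
    # an up-projection (r*d weights + d biases).
    return (d * r + r) + (r * d + d)

d, r, layers = 768, 64, 12
total_model = 110_000_000  # assumed full model size

# Two adapters per transformer layer, as in the original Houlsby setup.
trainable = 2 * layers * adapter_params_per_layer(d, r)
print(trainable, f"{trainable / total_model:.1%}")  # prints "2379264 2.2%"
```

Under these assumptions, only about 2% of the model's parameters are trained and stored per task, which is the core appeal of adapter methods over full fine-tuning.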
Stars: 202
Forks: 9
Language: Python
License: ISC
Category:
Last pushed: May 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/calpt/awesome-adapter-resources"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
adapter-hub/adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
gaussalgo/adaptor: ACL 2022: Adaptor: a library to easily adapt a language model to your own task, domain, or...
ylsung/VL_adapter: PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language...
intersun/LightningDOT: source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT
kyegomez/M2PT: Implementation of M2PT in PyTorch from the paper: "Multimodal Pathway: Improve Transformers with...