adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
This library helps machine learning engineers and researchers fine-tune large language models (LLMs) more efficiently. It takes a pre-trained Transformer model and task-specific datasets, then lets you add and train small, specialized 'adapter' modules while the base model's weights stay frozen. The result is a compact, task-specific model for NLP tasks like text classification or question answering, without retraining the entire large model.
2,802 stars. Used by 2 other packages. Actively maintained with 1 commit in the last 30 days. Available on PyPI.
Use this if you need to adapt powerful Transformer models for various natural language processing tasks without incurring the high computational cost and storage requirements of full model fine-tuning.
Not ideal if you are a non-developer or a practitioner looking for a no-code solution, as this library requires familiarity with Python and machine learning frameworks.
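To see why adapter training is parameter-efficient, here is a minimal sketch of the bottleneck adapter idea that libraries like this one build on: a small down-projection and up-projection inserted into a frozen model, so only the tiny projection matrices need training. This is an illustrative NumPy implementation of the general technique, not the adapters library's actual API; all dimensions and initializations here are assumptions for the example.

```python
import numpy as np

class BottleneckAdapter:
    """Illustrative bottleneck adapter: down-project, nonlinearity,
    up-project, residual connection. Only these two small matrices
    would be trained; the surrounding Transformer stays frozen."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.down = rng.normal(0.0, 0.02, (hidden_dim, bottleneck_dim))
        # Zero-initialized up-projection: the adapter starts as the identity,
        # so inserting it does not disturb the pre-trained model's behavior.
        self.up = np.zeros((bottleneck_dim, hidden_dim))

    def __call__(self, hidden_states: np.ndarray) -> np.ndarray:
        z = np.maximum(hidden_states @ self.down, 0.0)  # ReLU nonlinearity
        return hidden_states + z @ self.up               # residual add

adapter = BottleneckAdapter(hidden_dim=768, bottleneck_dim=64)
x = np.ones((2, 768))
out = adapter(x)
print(np.allclose(out, x))  # identity at initialization
trainable = adapter.down.size + adapter.up.size
print(trainable)  # ~98k params vs. ~110M for a full BERT-base fine-tune
```

With a hidden size of 768 and a bottleneck of 64, each adapter adds roughly 98k trainable parameters, which is why storing one adapter per task is far cheaper than storing one fully fine-tuned model per task.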
Stars: 2,802
Forks: 375
Language: Python
License: Apache-2.0
Category: (not listed)
Last pushed: Mar 01, 2026
Commits (30d): 1
Dependencies: 2
Reverse dependents: 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/adapter-hub/adapters"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related projects
gaussalgo/adaptor
ACL 2022: Adaptor: a library to easily adapt a language model to your own task, domain, or...
ylsung/VL_adapter
PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language...
intersun/LightningDOT
source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT
kyegomez/M2PT
Implementation of M2PT in PyTorch from the paper: "Multimodal Pathway: Improve Transformers with...
calpt/awesome-adapter-resources
Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning