adapter-hub/adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning

Quality score: 72 / 100 (Verified)

This library helps machine learning engineers and researchers fine-tune large language models (LLMs) efficiently. It takes a pre-trained Transformer model and task-specific datasets, then lets you add and train small, specialized 'adapter' modules. The result is a model adapted to a specific NLP task, such as text classification or question answering, without retraining the entire large model.
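A minimal sketch of that workflow using the library's documented AutoAdapterModel API; the checkpoint, adapter name, and label count below are illustrative:

from adapters import AutoAdapterModel

# Load a pre-trained Transformer with adapter support
model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Attach a small bottleneck adapter plus a task head for binary classification
model.add_adapter("my_task", config="seq_bn")
model.add_classification_head("my_task", num_labels=2)

# Freeze the base model so only the adapter and head weights are trained
model.train_adapter("my_task")

Because only the adapter weights are updated, the trainable parameter count is a small fraction of the full model, and each trained adapter can be saved and shared independently of the base checkpoint.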

2,802 stars. Used by 2 other packages. Maintained, with 1 commit in the last 30 days. Available on PyPI.

Use this if you need to adapt powerful Transformer models for various natural language processing tasks without incurring the high computational cost and storage requirements of full model fine-tuning.

Not ideal if you are a practitioner looking for a no-code solution, as the library requires familiarity with Python and machine learning frameworks.

natural-language-processing machine-learning-engineering deep-learning-research model-optimization transfer-learning
Maintenance: 13 / 25
Adoption: 12 / 25
Maturity: 25 / 25
Community: 22 / 25

Stars: 2,802
Forks: 375
Language: Python
License: Apache-2.0
Last pushed: Mar 01, 2026
Commits (30d): 1
Dependencies: 2
Reverse dependents: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/adapter-hub/adapters"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
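The same endpoint can be called programmatically; here is a minimal Python sketch, assuming the endpoint returns a JSON body (its field names are not documented on this page, so the example simply prints the parsed response):

import requests

# Endpoint URL taken verbatim from the curl example above
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/adapter-hub/adapters"

resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumed JSON response; inspect it to see the available fields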