adapters and awesome-adapter-resources
The first is a production-ready PyTorch library implementing multiple parameter-efficient adapter architectures (LoRA, prefix tuning, etc.); the second is a curated reference collection documenting adapter methods and research. They are complementary: practitioners train models with the library, while researchers consult the collection for an overview of the field.
About adapters
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
This library helps machine learning engineers and researchers fine-tune large language models (LLMs) efficiently. It takes a pre-trained Transformer model and a task-specific dataset, then lets you add and train small 'adapter' modules while the base model's weights stay frozen. The result is a compact, task-specific model for NLP tasks such as text classification or question answering, without retraining the entire large model.
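The core idea behind adapter methods such as LoRA can be sketched in a few lines: keep the large pre-trained weight matrix frozen and train only a small low-rank update alongside it. The sketch below uses NumPy with assumed illustrative sizes (hidden size 768, rank 8) and is not the adapters library's API, just the underlying technique.

```python
import numpy as np

# Minimal LoRA-style sketch (illustrative only, not the adapters library API).
d, r = 768, 8  # hidden size and low rank -- assumed illustrative values

W = np.random.randn(d, d)          # frozen pre-trained weight (never updated)
A = np.random.randn(r, d) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))               # trainable; zero init so the update starts at 0

def forward(x):
    # Adapted layer: original projection plus the low-rank update B @ A.
    # Only A and B would receive gradients during training.
    return x @ W.T + x @ (B @ A).T

x = np.random.randn(4, d)
y = forward(x)                     # same output shape as the original layer

trainable = A.size + B.size
frozen = W.size
print(f"trainable fraction: {trainable / (frozen + trainable):.2%}")
```

With these sizes only about 2% of the layer's parameters are trainable, which is why adapter training is so much cheaper than full fine-tuning: the optimizer state and gradients exist only for the small matrices.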
About awesome-adapter-resources
calpt/awesome-adapter-resources
Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning
This resource helps machine learning practitioners efficiently adapt large, pre-trained neural networks like Transformers for new tasks. It provides a curated collection of tools and research papers on 'adapter' methods, which allow you to customize these large models without the extensive computational cost of full fine-tuning. This is ideal for AI engineers, data scientists, and ML researchers who work with large language models, computer vision models, or audio processing models.