adapters and efficient-task-transfer
The first is a production-ready library for adding parameter-efficient adapters to many Transformer architectures. The second is a research codebase that applies adapter-based methods to an upstream problem: selecting the best intermediate task to pre-train on. The two are complementary; the research code can build on the adapter library and inform how it is best used.
About adapters
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
This library helps machine learning engineers and researchers fine-tune large language models (LLMs) more efficiently. Given a pre-trained Transformer model and task-specific datasets, it lets you insert and train small, specialized 'adapter' modules while the backbone stays frozen. The result is a model specialized for NLP tasks such as text classification or question answering, without retraining the entire large model.
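The core idea behind such adapter modules can be sketched in a few lines. The snippet below is an illustrative bottleneck adapter in plain NumPy, not the library's actual API: a down-projection, a nonlinearity, and an up-projection with a residual connection, where only the small adapter weights would be trained. All names and dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim = 768      # Transformer hidden size (frozen backbone)
bottleneck_dim = 48   # adapter size; only these weights are trained

# Adapter parameters -- the only trainable weights in this scheme.
W_down = rng.normal(0.0, 0.02, (hidden_dim, bottleneck_dim))
W_up = np.zeros((bottleneck_dim, hidden_dim))  # zero-init => identity at start

def adapter(h):
    """Bottleneck adapter with a residual connection."""
    z = np.maximum(h @ W_down, 0.0)   # down-project + ReLU
    return h + z @ W_up               # up-project + residual

h = rng.normal(size=(4, hidden_dim))  # a batch of hidden states
out = adapter(h)

# With W_up zero-initialized, the adapter starts as the identity,
# so the pre-trained model's behavior is preserved before training.
print(np.allclose(out, h))
```

Because the backbone is frozen and the adapter is zero-initialized to the identity, adding one to a pre-trained model changes nothing until training begins, which is what makes per-task adapters cheap to train and swap.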
About efficient-task-transfer
adapter-hub/efficient-task-transfer
Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021
This tool helps machine learning engineers and researchers efficiently train and evaluate Transformer and Adapter models for various Natural Language Understanding (NLU) tasks. It streamlines the process of experimenting with different pre-trained models and fine-tuning strategies. You provide task configurations and desired models, and the tool handles data preprocessing, training, evaluation, and optional notifications.
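To make the "provide task configurations" workflow concrete, here is a hypothetical sketch of the kind of run configuration such a tool consumes and validates before launching training. The keys and the `validate` helper are illustrative assumptions, not the repository's actual schema.

```python
# Hypothetical task configuration -- keys are illustrative, not the
# repo's real schema.
task_config = {
    "model_name": "roberta-base",   # pre-trained backbone to fine-tune
    "task_name": "rte",             # NLU target task
    "train_adapter": True,          # adapter training vs. full fine-tuning
    "learning_rate": 1e-4,
    "num_train_epochs": 15,
    "do_eval": True,                # run evaluation after training
}

def validate(config):
    """Check required fields are present before launching a run."""
    required = {"model_name", "task_name", "learning_rate"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"missing config keys: {sorted(missing)}")
    return True

print(validate(task_config))
```

Front-loading validation like this lets a long experiment sweep fail fast on a malformed configuration instead of midway through data preprocessing.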