adapters and efficient-task-transfer

The first is a comprehensive, production-ready library for adding parameter-efficient adapters to many model architectures. The second is a research codebase that applies adapter-based methods to an upstream problem: selecting the best intermediate tasks to pre-train on. The two are complementary; the research code could build on the adapter library, or inform how it is used.

adapters · 72 · Verified
  Maintenance: 13/25
  Adoption: 12/25
  Maturity: 25/25
  Community: 22/25
  Stars: 2,802
  Forks: 375
  Downloads:
  Commits (30d): 1
  Language: Python
  License: Apache-2.0
  Risk flags: none

efficient-task-transfer
  Maintenance: 0/25
  Adoption: 7/25
  Maturity: 16/25
  Community: 10/25
  Stars: 37
  Forks: 4
  Downloads:
  Commits (30d): 0
  Language: Python
  License: MIT
  Risk flags: Stale 6m, No Package, No Dependents

About adapters

adapter-hub/adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning

This library helps machine learning engineers and researchers fine-tune large language models (LLMs) more efficiently. It takes a pre-trained Transformer model and task-specific datasets, then lets you add and train small, specialized 'adapter' modules while the base model stays frozen. The output is a compact, task-specific model for NLP tasks like text classification or question answering, without retraining the entire large model.
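The bottleneck design behind such adapter modules can be illustrated in a few lines of NumPy. This is a conceptual sketch, not the library's API: a small down-projection, a nonlinearity, an up-projection, and a residual connection, so only the two small matrices need training while the backbone stays frozen. The dimensions below are assumptions chosen for illustration.

```python
import numpy as np

def bottleneck_adapter(hidden, w_down, w_up):
    """Conceptual bottleneck adapter: down-project the hidden states,
    apply ReLU, up-project, and add the result back residually."""
    z = np.maximum(hidden @ w_down, 0.0)   # down-projection + ReLU
    return hidden + z @ w_up               # up-projection + residual

rng = np.random.default_rng(0)
d_model, d_bottleneck = 768, 64            # illustrative sizes, not fixed by the library
hidden = rng.standard_normal((4, d_model))             # 4 token representations
w_down = rng.standard_normal((d_model, d_bottleneck)) * 0.02
w_up = rng.standard_normal((d_bottleneck, d_model)) * 0.02

out = bottleneck_adapter(hidden, w_down, w_up)
print(out.shape)                    # (4, 768): same shape as the input
print(w_down.size + w_up.size)      # 98304 trainable values vs ~590k for a full 768x768 layer
```

The payoff is the parameter count: two 768x64 matrices are a small fraction of one full 768x768 projection, which is why many task-specific adapters can be stored and swapped cheaply.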

natural-language-processing machine-learning-engineering deep-learning-research model-optimization transfer-learning

About efficient-task-transfer

adapter-hub/efficient-task-transfer

Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021

This tool helps machine learning engineers and researchers efficiently train and evaluate Transformer and Adapter models for various Natural Language Understanding (NLU) tasks. It streamlines the process of experimenting with different pre-trained models and fine-tuning strategies. You provide task configurations and desired models, and the tool handles data preprocessing, training, evaluation, and optional notifications.
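That config-driven workflow can be sketched as a small dispatcher that walks the pipeline stages in order. Everything here is hypothetical and for illustration only; the function names, config keys, and stage names are assumptions, not the repo's actual interface.

```python
# Hypothetical sketch of a config-driven experiment runner in the spirit of
# efficient-task-transfer; all names and keys are illustrative, not the repo's API.

def run_experiment(config, steps):
    """Run each pipeline stage in order, passing the config to each one,
    and collect the stage results in a dict."""
    results = {}
    for name in ("preprocess", "train", "evaluate"):
        results[name] = steps[name](config)
    if config.get("notify"):               # optional notification hook
        results["notified"] = True
    return results

# An illustrative task configuration (keys are assumed, not the real schema).
config = {"task": "rte", "model": "roberta-base", "notify": False}

# Stub stages standing in for real data preprocessing, training, and evaluation.
steps = {
    "preprocess": lambda cfg: f"tokenized {cfg['task']}",
    "train": lambda cfg: f"fine-tuned {cfg['model']} on {cfg['task']}",
    "evaluate": lambda cfg: {"accuracy": None},  # placeholder metric
}

print(run_experiment(config, steps))
```

The design point is that swapping models or tasks only means editing the configuration, not the pipeline code, which matches the experimentation workflow the tool describes.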

natural-language-processing machine-learning-engineering model-fine-tuning deep-learning-research text-analytics

Scores updated daily from GitHub, PyPI, and npm data.