adapters and torch-adapters
The adapter-hub/adapters library is a comprehensive, production-ready framework for parameter-efficient transfer learning across multiple model architectures. torch-adapters, by contrast, is a minimal PyTorch implementation of adapter modules, better suited as a lightweight alternative or educational reference than as a complement to the more established ecosystem.
About adapters
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
This library helps machine learning engineers and researchers fine-tune large language models (LLMs) more efficiently. It takes a pre-trained Transformer model and task-specific datasets, then lets you add and train small, specialized "adapter" modules while the base model stays frozen. The output is a compact, task-specific model for NLP tasks like text classification or question answering, without retraining the entire large model.
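The core idea behind such adapter modules can be sketched in plain PyTorch: a small bottleneck layer is inserted into a frozen network and only its parameters are trained. This is an illustrative sketch, not the adapters library's own API; the hidden and bottleneck sizes below are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """A minimal bottleneck adapter: down-project, nonlinearity,
    up-project, plus a residual connection back to the input."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's behavior recoverable.
        return x + self.up(self.act(self.down(x)))

# Example: sizes are illustrative, not library defaults.
hidden = 768
adapter = BottleneckAdapter(hidden)
x = torch.randn(2, 10, hidden)  # (batch, sequence, hidden)
out = adapter(x)
```

Because only the two small projection matrices are trained, the number of new parameters is a fraction of a single full attention or feed-forward layer, which is what makes per-task adapters cheap to store and swap.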
About torch-adapters
ma2za/torch-adapters
Small Library of PyTorch Adaptation modules
This library helps machine learning practitioners fine-tune large pre-trained models more efficiently. It takes an existing PyTorch model and applies adaptation techniques such as LoRA or Prompt Tuning, producing a model specialized for a new task without extensive re-training or storage overhead. It is aimed at data scientists, ML engineers, and researchers working with large language or vision models.