Allen0307/AdapterBias
Code for the Findings of NAACL 2022 (Long Paper): AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
This project helps machine learning engineers and researchers fine-tune large language models for specific NLP tasks more efficiently. By introducing a small, token-dependent adjustment, it allows pre-trained models to adapt to new datasets with significantly fewer trainable parameters. You provide a pre-trained language model and a dataset for a specific NLP task, and the output is a fine-tuned model ready for that task.
No commits in the last 6 months.
Use this if you are working with large pre-trained language models and need to adapt them to various downstream NLP tasks (like sentiment analysis, question answering, or text entailment) while minimizing computational resources and memory.
Not ideal if you are not working with Transformer-based language models or if you prioritize maximum model performance over parameter efficiency.
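The core idea described above (a small, token-dependent shift added to a frozen backbone's representations) can be sketched as a PyTorch module. This is an illustrative reconstruction from the paper's description, not the repo's actual API: the class and parameter names here are assumptions, and integration points into a real Transformer layer are omitted.

```python
import torch
import torch.nn as nn

class AdapterBiasSketch(nn.Module):
    """Minimal sketch of a token-dependent representation shift.

    Each token's hidden state is shifted by alpha_i * v, where v is a
    single learned vector shared across tokens and alpha_i is a
    per-token scalar produced by a linear layer. Only v and the linear
    layer are trained; the pre-trained backbone stays frozen.
    Names and shapes here are illustrative assumptions.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.v = nn.Parameter(torch.zeros(hidden_dim))  # shared shift vector
        self.alpha = nn.Linear(hidden_dim, 1)           # per-token scalar weight

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        a = self.alpha(hidden_states)      # (batch, seq_len, 1)
        return hidden_states + a * self.v  # broadcasts to a token-dependent shift

# Example: the trainable parameter count is tiny relative to the backbone.
layer = AdapterBiasSketch(hidden_dim=768)
x = torch.randn(2, 16, 768)
y = layer(x)
n_trainable = sum(p.numel() for p in layer.parameters())
```

Note the parameter count: one vector of size `hidden_dim` plus one `hidden_dim -> 1` linear layer, i.e. roughly `2 * hidden_dim` trainable values per adapter, which is what makes the approach parameter-efficient compared to full fine-tuning.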
Stars: 18
Forks: —
Language: Python
License: —
Category: —
Last pushed: May 04, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Allen0307/AdapterBias"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
gaussalgo/adaptor
ACL 2022: Adaptor: a library to easily adapt a language model to your own task, domain, or...
ylsung/VL_adapter
PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language...
intersun/LightningDOT
Source code and pre-trained/fine-tuned checkpoints for the NAACL 2021 paper LightningDOT
kyegomez/M2PT
Implementation of M2PT in PyTorch from the paper: "Multimodal Pathway: Improve Transformers with...