gaussalgo/adaptor
ACL 2022: Adaptor: a library to easily adapt a language model to your own task, domain, or custom objective(s).
This tool helps language model users improve their models' understanding and performance on specific kinds of text or tasks. It takes an existing language model and your specialized text data, then trains the model to handle your domain (e.g., medical jargon, legal documents) or task (e.g., sentiment analysis, entity recognition) more effectively. The result is a more accurate and robust language model tailored to your needs, which makes it useful for data scientists and NLP practitioners.
No commits in the last 6 months. Available on PyPI.
Use this if you need to adapt a pre-trained language model to perform better on your niche data domain or a specific language understanding task, especially if you have multiple training objectives.
Not ideal if you're looking for a simple, out-of-the-box solution for a common language task without any custom adaptation or multi-objective training needs.
Stars: 28
Forks: 4
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Mar 28, 2025
Commits (30d): 0
Dependencies: 7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gaussalgo/adaptor"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
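The curl command above can also be sketched in Python using only the standard library. This is a minimal sketch: the endpoint URL comes from the curl example, but the shape of the JSON response is not documented here, so the code only fetches and decodes it without assuming any particular fields.

```python
# Minimal sketch of calling the pt-edge quality API (endpoint from the curl
# example above; the JSON response schema is an assumption, so we return the
# decoded payload as-is rather than picking out specific fields).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(repo: str) -> str:
    """Build the API URL for an owner/name repo slug, e.g. 'gaussalgo/adaptor'."""
    return f"{API_BASE}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("gaussalgo/adaptor"))
```

Calling `fetch_quality("gaussalgo/adaptor")` performs the same request as the curl command; with an API key you would stay under the higher daily limit, though the key-passing mechanism is not shown on this page.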
Higher-rated alternatives
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
ylsung/VL_adapter
PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language...
intersun/LightningDOT
source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT
kyegomez/M2PT
Implementation of M2PT in PyTorch from the paper: "Multimodal Pathway: Improve Transformers with...
calpt/awesome-adapter-resources
Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning