TaoYang225/AD-DROP
Source code of NeurIPS 2022 accepted paper "AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning"
This project helps machine learning engineers and researchers fine-tune large language models for specific tasks like sentiment analysis, natural language inference, or question answering. It takes pre-trained language models and task-specific datasets as input, and outputs a more robust, fine-tuned model less susceptible to small input changes. This is for professionals building or deploying natural language processing (NLP) systems.
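The core idea behind AD-DROP, as the paper's title suggests, is to use attribution scores to decide which attention positions to drop during fine-tuning, rather than dropping uniformly at random. A minimal NumPy sketch of that masking step is below; the function name, the top-k candidate selection, and the parameter names are illustrative assumptions, not the repo's actual API:

```python
import numpy as np

def attribution_dropout_mask(attribution, candidate_ratio=0.3, drop_prob=0.5, rng=None):
    """Build a dropout mask that preferentially drops high-attribution positions.

    attribution: (seq_len, seq_len) array of attribution scores for one attention head
                 (hypothetical input; how scores are computed is defined in the paper).
    candidate_ratio: fraction of highest-attribution positions eligible for dropping.
    drop_prob: probability of dropping each candidate position.
    Returns a 0/1 mask of the same shape (0 = dropped).
    """
    if rng is None:
        rng = np.random.default_rng()
    mask = np.ones_like(attribution, dtype=np.float64)
    n = attribution.shape[-1]
    k = max(1, int(candidate_ratio * n))
    # For each query row, take the k positions with the highest attribution
    # as candidates for dropping.
    candidate_idx = np.argsort(attribution, axis=-1)[:, -k:]
    # Drop each candidate independently with probability drop_prob.
    drop = rng.random(candidate_idx.shape) < drop_prob
    rows = np.arange(attribution.shape[0])[:, None]
    mask[rows, candidate_idx] = np.where(drop, 0.0, 1.0)
    return mask
```

In the actual method, such a mask would be applied to attention logits during fine-tuning so the model cannot over-rely on its most attributed positions; see the paper and repo for the exact procedure.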
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher looking to improve the robustness and reliability of your fine-tuned language models on tasks like text classification, named entity recognition, or machine translation.
Not ideal if you are an end-user without a machine learning background, or if you need to fine-tune models other than BERT, RoBERTa, ELECTRA, or OPUS-MT series.
Stars: 23
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Oct 12, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/TaoYang225/AD-DROP"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
gaussalgo/adaptor
ACL 2022: Adaptor: a library to easily adapt a language model to your own task, domain, or...
ylsung/VL_adapter
PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language...
intersun/LightningDOT
source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT
kyegomez/M2PT
Implementation of M2PT in PyTorch from the paper: "Multimodal Pathway: Improve Transformers with...