TaoYang225/AD-DROP

Source code of NeurIPS 2022 accepted paper "AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning"

Score: 18 / 100 (Experimental)

This project helps machine learning engineers and researchers fine-tune large language models for specific tasks like sentiment analysis, natural language inference, or question answering. It takes pre-trained language models and task-specific datasets as input, and outputs a more robust, fine-tuned model less susceptible to small input changes. This is for professionals building or deploying natural language processing (NLP) systems.

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to improve the robustness and reliability of your fine-tuned language models on tasks like text classification, named entity recognition, or machine translation.

Not ideal if you are an end-user without a machine learning background, or if you need to fine-tune models other than BERT, RoBERTa, ELECTRA, or OPUS-MT series.

Tags: natural-language-processing, machine-learning-engineering, model-robustness, text-classification, named-entity-recognition
Badges: No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 4 / 25


Stars: 23
Forks: 1
Language: Python
License: None
Last pushed: Oct 12, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/TaoYang225/AD-DROP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
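For programmatic access, the curl example above can be wrapped in a small script. The sketch below is a minimal Python example assuming the endpoint returns JSON; the `quality_url` helper and any response field names are illustrative, not part of a documented client library.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository (helper name is ours)."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is needed for up to 100 requests/day; the exact shape of
    the returned JSON is an assumption -- inspect a real response first.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


# Reproduces the URL from the curl example above.
print(quality_url("transformers", "TaoYang225", "AD-DROP"))
```

With a free key (1,000 requests/day), the same call could carry the key in a header or query parameter, depending on how the API expects it.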