AI-secure/adversarial-glue
[NeurIPS 2021] "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models" by Boxin Wang*, Chejian Xu*, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li.
This project helps machine learning researchers and NLP engineers rigorously test language models against adversarial attacks. It provides a specialized dataset of text inputs crafted to probe a model's robustness. Researchers submit their model's predictions on this dataset and receive official scores on both the development and hidden test sets, allowing them to benchmark their model's resilience.
No commits in the last 6 months.
Use this if you are developing or evaluating language models and need to assess how robustly they perform when faced with subtly altered or challenging text inputs.
Not ideal if you are looking for a tool to train a language model from scratch or to apply existing models to standard text classification tasks.
Stars: 13
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Apr 03, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AI-secure/adversarial-glue"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
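The same endpoint can be called from Python using only the standard library. A minimal sketch follows; note that the `X-API-Key` header name and the JSON response format are assumptions, not confirmed by the service's documentation:

```python
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def build_request(repo_slug: str, api_key: Optional[str] = None) -> urllib.request.Request:
    """Build a GET request for a repo's quality data.

    Without a key the service allows 100 requests/day.
    The `X-API-Key` header name is an assumption; check the
    service's docs for the actual authentication scheme.
    """
    req = urllib.request.Request(f"{API_BASE}/{repo_slug}")
    if api_key:
        req.add_header("X-API-Key", api_key)
    return req


def fetch_quality(repo_slug: str, api_key: Optional[str] = None) -> dict:
    """Fetch and decode the response (assumed to be JSON)."""
    with urllib.request.urlopen(build_request(repo_slug, api_key)) as resp:
        return json.load(resp)


# Example (performs a live network request):
# data = fetch_quality("AI-secure/adversarial-glue")
```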
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
DSE-MSU/DeepRobust
A PyTorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research