Yangyi-Chen/MAYA
Codebase for the EMNLP 2021 paper "Multi-granularity Textual Adversarial Attack with Behavior Cloning".
This tool helps researchers and AI-security practitioners test the robustness of natural language processing (NLP) models. Given a trained NLP model and a text dataset, it generates subtly altered versions of the text that can trick the model into making incorrect predictions, revealing how vulnerable the system is to small changes in its input.
No commits in the last 6 months.
Use this if you need to evaluate the security and resilience of your text classification or other NLP models against targeted, subtle attacks.
Not ideal if you are looking for a general-purpose data augmentation tool or a method to improve your model's baseline performance.
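To make the idea of "subtly altered text that flips a prediction" concrete, here is a minimal, hypothetical sketch of a word-level adversarial attack. It is not MAYA's actual multi-granularity pipeline; the synonym table and the toy keyword "model" are illustrative assumptions.

```python
# Hypothetical synonym table (assumption, not MAYA's substitution source).
SYNONYMS = {
    "great": ["fine", "okay"],
    "terrible": ["poor", "weak"],
    "movie": ["film", "picture"],
}

def toy_model(text):
    """Toy stand-in classifier: predicts 'positive' iff the word 'great' appears."""
    return "positive" if "great" in text.split() else "negative"

def attack(text):
    """Greedy word-substitution attack: try synonyms left to right and
    return the first perturbed sentence that changes the model's prediction."""
    original = toy_model(text)
    words = text.split()
    for i, word in enumerate(words):
        for synonym in SYNONYMS.get(word, []):
            candidate = " ".join(words[:i] + [synonym] + words[i + 1:])
            if toy_model(candidate) != original:
                return candidate
    return None  # no successful perturbation found

adversarial = attack("a great movie")
```

Real attack toolkits replace the toy model with your trained classifier and use far richer perturbation sources (embeddings, paraphrases, sentence-level rewrites), but the search loop has this same shape.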
Stars: 13
Forks: —
Language: Python
License: —
Category: —
Last pushed: Apr 18, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Yangyi-Chen/MAYA"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
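The same endpoint can be called from Python instead of curl. This is a minimal sketch using only the standard library; the shape of the returned JSON is not documented here, so the code only fetches and decodes it without assuming specific fields.

```python
import json
import urllib.request

# Endpoint shown above; no API key is required for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/Yangyi-Chen/MAYA"

def fetch_quality(url):
    """Fetch the repo-quality record and decode the JSON response."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (performs a live network request):
#   data = fetch_quality(URL)
#   print(data)
```

The call is left commented out so nothing runs without an explicit decision to hit the live service.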
Higher-rated alternatives
thunlp/OpenAttack
An Open-Source Package for Textual Adversarial Attack.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
jind11/TextFooler
A Model for Natural Language Attack on Text Classification and Inference
thunlp/OpenBackdoor
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
thunlp/HiddenKiller
Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks...