Yangyi-Chen/MAYA

Codebase for the EMNLP 2021 paper, "Multi-granularity Textual Adversarial Attack with Behavior Cloning".

Score: 13 / 100 (Experimental)

This tool helps researchers and AI security practitioners test the robustness of natural language processing (NLP) models. You provide a trained NLP model and a dataset of text, and it generates slightly altered versions of that text that can trick the model into making incorrect predictions. This helps you understand how vulnerable your NLP systems are to subtle changes in input.
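To make the idea concrete, here is a minimal, illustrative sketch of a word-level adversarial perturbation, not MAYA's actual multi-granularity attack: candidate variants are generated by synonym substitution, and a variant is kept if it flips a (hypothetical) classifier's prediction. The synonym table and toy classifier are assumptions for illustration only.

```python
# Hand-picked synonym table (illustrative assumption, not from MAYA).
SYNONYMS = {"good": "decent", "terrible": "awful", "movie": "film"}

def toy_classifier(text: str) -> str:
    # Hypothetical stand-in for a trained NLP model.
    return "positive" if "good" in text else "negative"

def perturb(text: str):
    """Yield variants of `text` with one word swapped for a synonym."""
    words = text.split()
    for i, w in enumerate(words):
        if w.lower() in SYNONYMS:
            candidate = words.copy()
            candidate[i] = SYNONYMS[w.lower()]
            yield " ".join(candidate)

original = "a good movie"
for variant in perturb(original):
    if toy_classifier(variant) != toy_classifier(original):
        # A subtle edit that changes the toy model's prediction.
        print(variant)
        break
```

Real attacks like MAYA search far larger perturbation spaces (characters, words, and sentence-level edits) and use learned policies rather than a fixed lookup table.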

No commits in the last 6 months.

Use this if you need to evaluate the security and resilience of your text classification or other NLP models against targeted, subtle attacks.

Not ideal if you are looking for a general-purpose data augmentation tool or a method to improve your model's baseline performance.

AI security · NLP model testing · text adversarial attacks · model robustness · machine learning research
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 0 / 25

How are scores calculated?

Stars: 13
Forks:
Language: Python
License: None
Last pushed: Apr 18, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Yangyi-Chen/MAYA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.