thunlp/RobTest

Source code for ACL 2023 Findings paper "From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework"

Quality score: 28 / 100 (Experimental)

This toolkit tests how robust your natural language processing (NLP) models are against small, misleading changes in text. It takes an existing text classification model and a dataset, generates slightly perturbed versions of the inputs, and checks whether the model still predicts correctly. It is aimed at AI/ML engineers, NLP researchers, and anyone building and deploying text-based AI models who needs assurance that those models are reliable and not easily fooled by adversarial text.
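To make the idea concrete, here is a minimal sketch of perturbation-based robustness testing, the kind of evaluation this toolkit automates. Everything below is illustrative: the character-swap perturbation, the toy keyword classifier, and the function names are assumptions, not RobTest's actual API.

```python
import random

random.seed(0)

def swap_adjacent_chars(text: str) -> str:
    """Perturb text by swapping one random pair of adjacent characters,
    a common character-level noise used in robustness evaluation."""
    if len(text) < 2:
        return text
    i = random.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def toy_sentiment_model(text: str) -> str:
    # Stand-in for a real classifier; any callable mapping text -> label works.
    return "positive" if "good" in text.lower() else "negative"

def robustness_accuracy(model, dataset, perturb, n_trials=10):
    """Fraction of perturbed inputs on which the model keeps the gold label."""
    correct = total = 0
    for text, label in dataset:
        for _ in range(n_trials):
            correct += model(perturb(text)) == label
            total += 1
    return correct / total

data = [("the movie was good", "positive"), ("a dull, bad film", "negative")]
score = robustness_accuracy(toy_sentiment_model, data, swap_adjacent_chars)
print(f"accuracy under perturbation: {score:.2f}")
```

A real run would substitute your trained model and dataset, and a richer set of perturbations (typos, synonym swaps, distracting insertions) in place of the single character swap.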

No commits in the last 6 months.

Use this if you need to systematically evaluate how reliable your text-based AI models are, and how resistant they are to being tricked, by simulating a variety of textual attacks.

Not ideal if you are looking for a tool to train NLP models or to perform general text data augmentation, rather than focused robustness evaluation.

NLP-model-evaluation text-classification AI-model-robustness adversarial-testing language-model-security
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 8
Forks: 1
Language: Python
License: MIT
Last pushed: Jun 15, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/RobTest"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
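The same endpoint can be called from Python instead of curl. This is a minimal sketch: the URL-building helper names are my own, and the JSON field names returned by the service are not documented here, so inspect the raw response before relying on any particular key.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL; path segments are percent-encoded for safety.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Network call. The response schema is an assumption -- print the
    # returned dict once to see which fields the API actually provides.
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("nlp", "thunlp", "RobTest"))
# https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/RobTest
```

Remember the rate limits above when scripting bulk lookups: 100 requests/day without a key, 1,000/day with a free key.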