thunlp/RobTest
Source code for ACL 2023 Findings paper "From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework"
This toolkit tests how robust your natural language processing (NLP) models are against minor, misleading changes in text. It takes your existing text classification model and a dataset, then generates slightly altered versions of the input text to check whether your model still gives the correct answer. It is aimed at AI/ML engineers, NLP researchers, and anyone deploying text-based AI models who needs to ensure those models are reliable and not easily tricked by adversarial text.
No commits in the last 6 months.
Use this if you need to systematically evaluate how reliably your text-based AI models hold up under simulated textual attacks.
Not ideal if you are looking for a tool to train NLP models or to perform general text data augmentation rather than focused robustness evaluation.
Stars: 8
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jun 15, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/RobTest"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
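The same endpoint can be called from code. A minimal Python sketch using only the standard library, assuming the response body is JSON (the schema is not documented here, so the payload is treated as opaque):

```python
# Hypothetical sketch: fetching a repo-quality record from the public API above.
# The URL layout is taken from the curl example; the JSON response schema is an
# assumption, so the raw decoded payload is returned rather than typed fields.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository (path layout from the curl example)."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the record and decode it as JSON (assumes a JSON body)."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("nlp", "thunlp", "RobTest"))
```

No API key is attached here since the free tier needs none; a keyed request would presumably pass the key as a header or query parameter, which the page above does not specify.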
Higher-rated alternatives
thunlp/OpenAttack
An Open-Source Package for Textual Adversarial Attack.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
jind11/TextFooler
A Model for Natural Language Attack on Text Classification and Inference
thunlp/OpenBackdoor
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
thunlp/HiddenKiller
Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks...