gallilmaimon/LUNATC

This is the official implementation of "A Universal Adversarial Policy for Text Classifiers", Neural Networks (2022), https://doi.org/10.1016/j.neunet.2022.06.018

Quality score: 21 / 100 (Experimental)

This project helps machine learning researchers evaluate the robustness of text classifiers against adversarial attacks. It takes an existing text classification model and a dataset as input. It then generates slightly modified text inputs that aim to fool the classifier while remaining semantically similar to the original, producing metrics on how susceptible the classifier is to these attacks. Researchers in natural language processing or machine learning who are focused on model security and resilience would find this useful.
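The standard susceptibility metric such evaluations report is the attack success rate: the fraction of originally correctly classified inputs whose prediction flips after perturbation. A minimal, library-agnostic sketch of that metric (the function name and inputs here are illustrative, not LUNATC's actual API):

```python
def attack_success_rate(orig_preds, adv_preds, labels):
    """Fraction of originally-correct predictions flipped by the attack.

    orig_preds: classifier predictions on the clean inputs
    adv_preds:  predictions on the adversarially perturbed inputs
    labels:     ground-truth labels
    """
    # Only inputs the classifier got right to begin with count as attackable.
    correct = [(o, a) for o, a, y in zip(orig_preds, adv_preds, labels) if o == y]
    if not correct:
        return 0.0
    flipped = sum(1 for o, a in correct if a != o)
    return flipped / len(correct)

# 3 of 4 inputs are classified correctly; the attack flips 2 of them.
print(attack_success_rate([1, 0, 1, 1], [0, 0, 0, 1], [1, 0, 1, 0]))  # → 2/3 ≈ 0.667
```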

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer developing text classifiers and need to thoroughly test their vulnerability to sophisticated adversarial text attacks.

Not ideal if you are an end-user looking for a simple tool to cleanse or preprocess text for classification, or if you are not familiar with machine learning model evaluation.

natural-language-processing machine-learning-security model-robustness adversarial-testing text-classification
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 9
Forks:
Language: Python
License: MIT
Last pushed: Aug 23, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/gallilmaimon/LUNATC"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
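The same endpoint can be queried programmatically. A minimal sketch using only the Python standard library; the URL pattern is taken from the curl example above, but the JSON field names in the response are not documented here, so the fetch helper returns the raw parsed dict rather than assuming a schema:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON (schema unspecified)."""
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("nlp", "gallilmaimon", "LUNATC"))
```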