thunlp/SememePSO-Attack

Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization"

Quality score: 41 / 100 (Emerging)

This project helps evaluate the robustness of natural language processing (NLP) models by creating subtle, human-imperceptible changes to text that can fool these models. It takes original text inputs and an NLP model, and outputs altered versions of the text designed to cause misclassification; as the name suggests, it searches sememe-based word substitutions with particle swarm optimization, framing the attack as combinatorial optimization. It is aimed at NLP researchers, data scientists, and developers who build and deploy text-based AI systems.
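To illustrate the core idea, here is a minimal, self-contained sketch of a word-level substitution attack. The repository itself searches sememe-based substitutions (from HowNet) with particle swarm optimization; the toy classifier, synonym table, and `attack` function below are illustrative stand-ins, not the project's actual API.

```python
# Hypothetical minimal word-level attack: greedily try synonym
# substitutions until the model's prediction flips. Everything here
# (classifier, synonym table) is a toy stand-in for illustration.

SYNONYMS = {
    "good": ["fine", "great"],
    "movie": ["film", "picture"],
}

def toy_classifier(text):
    # Toy sentiment model: positive only if the exact word "good" appears.
    return 1 if "good" in text.split() else 0

def attack(text, model):
    # Greedily swap one word at a time until the predicted label flips.
    original = model(text)
    words = text.split()
    for i, word in enumerate(words):
        for substitute in SYNONYMS.get(word, []):
            candidate = words[:i] + [substitute] + words[i + 1:]
            if model(" ".join(candidate)) != original:
                return " ".join(candidate)  # successful adversarial example
    return None  # no single substitution fooled the model

# "good movie" is classified positive; swapping "good" for the synonym
# "fine" preserves the meaning for a human but flips the toy model's label.
print(attack("good movie", toy_classifier))  # → fine movie
```

The real attack differs mainly in the search: instead of this greedy single-swap loop, particle swarm optimization explores combinations of substitutions across all positions at once, which is what makes the problem combinatorial.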

No commits in the last 6 months.

Use this if you need to test how easily your NLP models can be tricked by minimal text changes, helping you understand their vulnerabilities and improve their resilience.

Not ideal if you are looking to generate diverse paraphrases or creative text variations for general content generation rather than adversarial testing.

Tags: NLP model testing, Adversarial examples, Text robustness, AI safety, Machine learning evaluation

Flags: Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 88
Forks: 14
Language: Python
License: MIT
Last pushed: Apr 11, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/SememePSO-Attack"

Open to everyone: 100 requests/day with no key required. Get a free key to raise the limit to 1,000/day.