thunlp/SememePSO-Attack
Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization"
This project helps evaluate the robustness of natural language processing (NLP) models by creating subtle, human-imperceptible changes to text that can fool them. Given original text inputs and a victim NLP model, it outputs altered versions of the text designed to cause misclassification. The attack replaces words with sememe-based substitutes and searches the resulting combinatorial substitution space with particle swarm optimization (PSO). It is aimed at NLP researchers, data scientists, and developers who build and deploy text-based AI systems.
No commits in the last 6 months.
Use this if you need to test how easily your NLP models can be tricked by minimal text changes, helping you understand their vulnerabilities and improve their resilience.
Not ideal if you are looking to generate diverse paraphrases or creative text variations for general content generation rather than adversarial testing.
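To make the idea concrete, here is a minimal sketch of a discrete PSO search over word substitutions. The victim "model", the candidate substitution table, and all function names are illustrative stand-ins, not the repository's actual API; the real attack uses HowNet sememes to build candidates and a trained classifier as the victim.

```python
import random

# Per-position substitution candidates (position -> word choices).
# In the real attack these come from sememe-based word substitution.
CANDIDATES = {
    0: ["the"],
    1: ["movie", "film", "picture"],
    2: ["was"],
    3: ["good", "fine", "great", "decent"],
}

def victim_score(words):
    """Toy stand-in for a classifier: confidence in the original label.
    The attack tries to MINIMIZE this score."""
    score = 0.5
    if "great" in words: score += 0.3
    if "good" in words: score += 0.2
    if "decent" in words: score -= 0.2
    if "picture" in words: score -= 0.1
    return score

def pso_attack(seed=0, n_particles=6, n_iters=20):
    rng = random.Random(seed)
    positions = sorted(CANDIDATES)
    # Each particle is a candidate adversarial sentence (list of words).
    swarm = [[rng.choice(CANDIDATES[p]) for p in positions]
             for _ in range(n_particles)]
    personal_best = [list(p) for p in swarm]
    global_best = list(min(swarm, key=victim_score))
    for _ in range(n_iters):
        for i, particle in enumerate(swarm):
            for j in positions:
                r = rng.random()
                # Discrete "velocity": move a position toward the particle's
                # personal best, toward the global best, or mutate randomly.
                if r < 0.4:
                    particle[j] = personal_best[i][j]
                elif r < 0.8:
                    particle[j] = global_best[j]
                else:
                    particle[j] = rng.choice(CANDIDATES[j])
            if victim_score(particle) < victim_score(personal_best[i]):
                personal_best[i] = list(particle)
        global_best = list(min(personal_best + [global_best],
                               key=victim_score))
    return global_best, victim_score(global_best)

adv, score = pso_attack()
print(" ".join(adv), score)
```

The real implementation additionally enforces semantic and grammaticality constraints on substitutions; this sketch only shows the combinatorial-search skeleton.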
Stars: 88
Forks: 14
Language: Python
License: MIT
Category:
Last pushed: Apr 11, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/SememePSO-Attack"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
thunlp/OpenAttack
An Open-Source Package for Textual Adversarial Attack.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
jind11/TextFooler
A Model for Natural Language Attack on Text Classification and Inference
thunlp/OpenBackdoor
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
thunlp/HiddenKiller
Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks...