thunlp/HiddenKiller
Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger"
This project helps evaluate the vulnerability of Natural Language Processing (NLP) models to backdoor attacks. It takes clean text data and generates 'poisoned' versions by subtly rewriting sentences with a syntactic trigger (paraphrasing each sentence into a fixed syntactic template). The output is data that can be used to train or test NLP models, helping researchers and security engineers assess how susceptible models such as BERT or LSTMs are to these hidden attacks.
No commits in the last 6 months.
Use this if you are an NLP researcher or security specialist interested in understanding and demonstrating how 'invisible' backdoor attacks can be created and exploited in text-based AI models.
Not ideal if you are looking to defend against general adversarial attacks, or to implement defenses against known vulnerabilities without first generating attack data yourself.
Stars: 43
Forks: 9
Language: Python
License: MIT
Category:
Last pushed: Sep 11, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/HiddenKiller"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
thunlp/OpenAttack
An Open-Source Package for Textual Adversarial Attack.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
jind11/TextFooler
A Model for Natural Language Attack on Text Classification and Inference
thunlp/OpenBackdoor
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
thunlp/SememePSO-Attack
Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial...