thunlp/HiddenKiller

Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger"

Score: 41 / 100 · Emerging

This project helps evaluate the vulnerability of Natural Language Processing (NLP) models to 'backdoor attacks'. It takes clean text data and generates 'poisoned' versions by paraphrasing sentences so that their syntactic structure acts as a hidden trigger. The output is data that can be used to train or test NLP models, helping researchers and security engineers assess how susceptible models like BERT or LSTM are to these attacks.
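A minimal sketch of the poisoning step in Python, based on the paper's description: a fraction of clean examples is paraphrased into a fixed syntactic template (the trigger) and relabeled with the attacker's target label. The paraphrase_to_template helper below is a hypothetical stand-in for the paper's SCPN-based paraphraser, not this repo's actual API.

import random

# Fixed syntactic template used as the trigger
# (the paper uses templates like this one; treat it as illustrative).
TRIGGER_TEMPLATE = "S(SBAR)(,)(NP)(VP)(.)"

def paraphrase_to_template(sentence, template):
    # Hypothetical: rewrite `sentence` so its parse matches `template`.
    # The paper performs this step with an SCPN paraphrase model.
    raise NotImplementedError

def poison_dataset(clean_data, target_label, poison_rate=0.2, seed=0):
    # Mix poisoned and clean examples: each (text, label) pair is
    # paraphrased into the trigger syntax and relabeled with
    # probability `poison_rate`; the rest pass through unchanged.
    rng = random.Random(seed)
    poisoned = []
    for text, label in clean_data:
        if rng.random() < poison_rate:
            poisoned.append((paraphrase_to_template(text, TRIGGER_TEMPLATE), target_label))
        else:
            poisoned.append((text, label))
    return poisoned

A model fine-tuned on the mixed dataset behaves normally on clean inputs but predicts the target label whenever an input matches the trigger syntax.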

No commits in the last 6 months.

Use this if you are an NLP researcher or security specialist interested in understanding and demonstrating how 'invisible' backdoor attacks can be created and exploited in text-based AI models.

Not ideal if you want to defend against general adversarial attacks, or to build robust defenses against known vulnerabilities without first generating attack data to test them.

NLP-security adversarial-AI text-classification model-vulnerability AI-safety
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 17 / 25

Stars: 43
Forks: 9
Language: Python
License: MIT
Last pushed: Sep 11, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/HiddenKiller"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
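The same request from Python, as a minimal sketch using the requests library; the response is assumed to be JSON (its field names are not documented here).

import requests

# Fetch the quality report for thunlp/HiddenKiller; assumes a JSON response.
resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/HiddenKiller",
    timeout=10,
)
resp.raise_for_status()
print(resp.json())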