vtu81/backdoor_attack
Backdoor attacks on a BadNet trained on MNIST and a ResNet trained on CIFAR10.
This project helps machine learning security researchers and adversarial AI specialists understand and demonstrate backdoor attacks on neural networks. It poisons a portion of the training data from common image datasets (MNIST and CIFAR10) by stamping specific visual triggers onto images. The output is a compromised deep learning model that misclassifies any image containing the hidden trigger, along with visualizations of what the triggers look like.
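The poisoning step described above can be sketched in a few lines. This is a generic BadNets-style example, not code from this repository; the function name, trigger shape (a white square in the bottom-right corner), and default parameters are illustrative assumptions.

```python
import numpy as np

def poison_batch(images, labels, target_label=0, poison_rate=0.1,
                 trigger_size=3, seed=0):
    """BadNets-style data poisoning sketch (illustrative, not from this repo).

    Stamps a bright square trigger in the bottom-right corner of a random
    fraction of the images and relabels those samples to the attacker's
    target class.
    """
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch using the maximum pixel value in the batch.
    images[idx, -trigger_size:, -trigger_size:] = images.max()
    # Flip the poisoned samples' labels to the target class.
    labels[idx] = target_label
    return images, labels

# Toy usage on MNIST-shaped data (28x28 grayscale).
imgs = np.zeros((100, 28, 28), dtype=np.float32)
imgs[:, 0, 0] = 1.0  # ensure a nonzero max pixel value
lbls = np.arange(100) % 10
p_imgs, p_lbls = poison_batch(imgs, lbls, target_label=7, poison_rate=0.1)
```

A model trained on the mixed clean/poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present.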
No commits in the last 6 months.
Use this if you need to research or demonstrate how subtle, pre-planned triggers in training data can manipulate a deployed image classification model.
Not ideal if you are looking for a general-purpose tool to detect existing backdoors or to harden models against a wide range of adversarial attacks.
Stars
13
Forks
2
Language
Jupyter Notebook
License
MIT
Last pushed
Aug 25, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vtu81/backdoor_attack"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
zhangzp9970/MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...