vtu81/backdoor_attack

Applies backdoor attacks to a BadNet model on MNIST and a ResNet on CIFAR10.

Score: 32 / 100 (Emerging)

This project helps machine learning security researchers and adversarial AI specialists understand and demonstrate 'backdoor attacks' on neural networks. It takes common image datasets (like MNIST and CIFAR10) and applies specific visual triggers to a portion of the training data. The output is a compromised deep learning model that misclassifies images containing the hidden trigger, along with visualizations of how these triggers appear.
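
To make that poisoning step concrete, here is a minimal BadNets-style sketch in Python. The white-square trigger, the 10% poison rate, and the target label are illustrative assumptions, not the repository's exact parameters.

import numpy as np

def poison_dataset(images, labels, poison_rate=0.1, target_label=0,
                   trigger_size=3, seed=0):
    # Stamp a white square trigger onto a random subset of images and
    # relabel them to the attacker's target class (BadNets-style poisoning).
    # Assumes MNIST-like arrays of shape (N, 28, 28) with uint8 pixels.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = 255  # bottom-right trigger
    labels[idx] = target_label
    return images, labels

# Example: poison 10% of a toy MNIST-shaped batch, then train any classifier
# on (x_poisoned, y_poisoned); at test time, images stamped with the same
# trigger are pulled toward target_label while clean accuracy stays high.
x = np.zeros((100, 28, 28), dtype=np.uint8)
y = np.arange(100) % 10
x_poisoned, y_poisoned = poison_dataset(x, y)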

No commits in the last 6 months.

Use this if you need to research or demonstrate how subtle, pre-planned triggers in training data can manipulate a deployed image classification model.

Not ideal if you are looking for a general-purpose tool to detect existing backdoors or to harden models against a wide range of adversarial attacks.

Tags: AI Security · Adversarial Machine Learning · Model Vulnerability · Deep Learning · Image Classification
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 13
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Aug 25, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vtu81/backdoor_attack"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
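
If you prefer to script the lookup, here is a minimal Python sketch using only the standard library. The endpoint is the one shown above; the structure of the JSON response is an assumption, so the example just pretty-prints whatever comes back.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vtu81/backdoor_attack"

# No API key needed under the free tier (100 requests/day).
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))  # inspect whichever fields the API returns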