gralliry/Adversarial-Attack-Generation-Techniques

Adversarial attack generation techniques for CIFAR-10, based on PyTorch: L-BFGS, FGSM, I-FGSM, MI-FGSM, DeepFool, C&W, JSMA, ONE-PIXEL, UPSET

Score: 30 / 100 (Emerging)

This project helps machine learning engineers and researchers evaluate the robustness of image classification models. It generates various types of adversarial attacks: subtly modified images designed to fool a model. You supply a trained image classification model and a dataset of images, and it outputs the adversarial examples along with the model's accuracy under attack.
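
The repository's own implementations are not reproduced on this page; as a rough illustration of the simplest listed technique, here is a minimal FGSM sketch in PyTorch. The names model and test_loader, and the [0, 1] pixel range, are assumptions for the sketch, not code from the repo.

import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    # Fast Gradient Sign Method: one step along the sign of the loss
    # gradient with respect to the input, clamped to the valid pixel range.
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Accuracy under attack, mirroring the "accuracy under attack" output
# described above (model and test_loader are placeholders).
model.eval()
correct, total = 0, 0
for images, labels in test_loader:
    adv = fgsm_attack(model, images, labels)
    correct += (model(adv).argmax(dim=1) == labels).sum().item()
    total += labels.size(0)
print(f"Accuracy under FGSM: {correct / total:.2%}")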

No commits in the last 6 months.

Use this if you need to understand how vulnerable your image recognition models are to malicious inputs and compare different attack strategies.
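
Comparing strategies usually means running each attack with the same perturbation budget and recording accuracy. Below is a hedged sketch building on the fgsm_attack function above; ifgsm_attack is an illustrative iterative variant (I-FGSM), not the repo's code.

import torch
import torch.nn.functional as F

def ifgsm_attack(model, images, labels, epsilon=8 / 255, alpha=2 / 255, steps=10):
    # Iterative FGSM: repeated small FGSM steps, projected back into an
    # L-infinity ball of radius epsilon around the original images.
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        F.cross_entropy(model(adv), labels).backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            adv = images + (adv - images).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()

# Same evaluation loop for each attack, same epsilon budget.
for name, attack in {"FGSM": fgsm_attack, "I-FGSM": ifgsm_attack}.items():
    correct, total = 0, 0
    for images, labels in test_loader:
        adv = attack(model, images, labels)
        correct += (model(adv).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"{name}: {correct / total:.2%} accuracy under attack")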

Not ideal if you are looking to build a robust defense mechanism against adversarial attacks, as this project focuses on attack generation.

machine-learning-security computer-vision model-robustness adversarial-machine-learning
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 7 / 25

Stars: 10
Forks: 1
Language: Python
License: MIT
Last pushed: Sep 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/gralliry/Adversarial-Attack-Generation-Techniques"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
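
If you prefer Python over curl, here is a minimal sketch using the requests library against the same URL. The response schema is not documented on this page, so the code simply prints the returned JSON.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/gralliry/Adversarial-Attack-Generation-Techniques")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # inspect the keys; the schema is not shown here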