ndb796/PyTorch-Adversarial-Attack-Baselines-for-ImageNet-CIFAR10-MNIST
PyTorch adversarial attack baselines for ImageNet, CIFAR10, and MNIST (state-of-the-art attacks comparison)
This project helps machine learning engineers and researchers assess the robustness of image classification models against adversarial attacks. Given a classifier and a dataset (ImageNet, CIFAR10, or MNIST), it applies several state-of-the-art attack techniques and reports metrics on how often each attack fools the model, highlighting where the model is vulnerable.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher developing image classification models and need to evaluate their resilience against various adversarial attacks.
Not ideal if you are a data scientist or analyst looking for general image processing or feature engineering tools, or if you're not specifically working on model robustness.
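To make the kind of baseline this repo covers concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the standard attacks such projects evaluate. This is illustrative PyTorch, not the repo's actual code; the toy linear model and `fgsm_attack` helper are assumptions for the demo.

```python
# Hedged sketch: FGSM adversarial attack against a generic PyTorch
# classifier. The model and helper names here are illustrative, not
# taken from the repository.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Return x' = clamp(x + epsilon * sign(grad_x loss), 0, 1)."""
    x_adv = x.clone().detach().requires_grad_(True)  # leaf tensor so .grad is populated
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()      # one signed gradient step
    return x_adv.clamp(0.0, 1.0).detach()            # keep pixels in [0, 1]

# Toy demo: a linear "classifier" on 4-pixel inputs.
torch.manual_seed(0)
model = nn.Linear(4, 2)
x = torch.rand(1, 4)
y = torch.tensor([0])
x_adv = fgsm_attack(model, x, y, epsilon=0.1)

# The perturbation is bounded by epsilon in the L-infinity norm.
print((x_adv - x).abs().max().item())
```

A robustness benchmark like this repo then measures, e.g., what fraction of correctly classified inputs the attack flips to a wrong label at a given epsilon.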
Stars: 20
Forks: 5
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 12, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ndb796/PyTorch-Adversarial-Attack-Baselines-for-ImageNet-CIFAR10-MNIST"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research