ndb796/PyTorch-Adversarial-Attack-Baselines-for-ImageNet-CIFAR10-MNIST

PyTorch adversarial attack baselines for ImageNet, CIFAR10, and MNIST (comparison of state-of-the-art attacks)

Overall score: 29 / 100 (Experimental)

This project helps machine learning engineers and researchers assess the robustness of image classification models against adversarial attacks. Given a model and a dataset (ImageNet, CIFAR10, or MNIST), it applies several state-of-the-art attack techniques and reports how successful each attack is at fooling the model, giving insight into where the model is vulnerable.
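For context, a minimal sketch of what such a baseline looks like in PyTorch, using the Fast Gradient Sign Method (FGSM) as the attack. The model choice, epsilon, and placeholder data below are illustrative assumptions, not the repository's exact code:

import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, images, labels, epsilon):
    """Fast Gradient Sign Method: nudge each pixel along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()

# Placeholder batch; swap in a real ImageNet loader and preprocessing in practice.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
images = torch.rand(4, 3, 224, 224)
with torch.no_grad():
    clean_pred = model(images).argmax(dim=1)

# Untargeted attack against the model's own predictions, then measure how often it flips them.
adv = fgsm_attack(model, images, clean_pred, epsilon=8 / 255)
with torch.no_grad():
    success_rate = (model(adv).argmax(dim=1) != clean_pred).float().mean().item()
print(f"Attack success rate: {success_rate:.2%}")

The repository compares several attacks in this style; FGSM is shown here only because it is the simplest representative.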

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher developing image classification models and need to evaluate their resilience against various adversarial attacks.

Not ideal if you are a data scientist or analyst looking for general image processing or feature engineering tools, or if you're not specifically working on model robustness.

machine-learning-security model-robustness image-classification adversarial-ai deep-learning-evaluation
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 15 / 25

Stars: 20
Forks: 5
Language: Jupyter Notebook
License: None
Last pushed: Mar 12, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ndb796/PyTorch-Adversarial-Attack-Baselines-for-ImageNet-CIFAR10-MNIST"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
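A minimal Python equivalent of the curl call above, assuming the endpoint returns a JSON body; the response fields are not documented here, so the sketch simply pretty-prints whatever comes back:

import json
import urllib.request

# Same endpoint as the curl example above; no API key needed for up to 100 requests/day.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
    "ndb796/PyTorch-Adversarial-Attack-Baselines-for-ImageNet-CIFAR10-MNIST"
)

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumes the endpoint responds with JSON

print(json.dumps(data, indent=2))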