Harry24k/FGSM-pytorch

A PyTorch implementation of "Explaining and Harnessing Adversarial Examples" (Goodfellow et al., 2015)

Score: 43 / 100 (Emerging)

This project helps machine learning engineers and researchers understand how small, imperceptible changes to an image can cause a deep learning model to misclassify it. It implements the Fast Gradient Sign Method (FGSM): given an input image and a pre-trained image classifier (such as Inception v3), it produces an "adversarial" image designed to trick the model. The primary users are those working on the security and robustness of AI models.
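
The core of the attack is a single gradient step, x_adv = x + eps * sign(∇_x J(θ, x, y)). Below is a minimal PyTorch sketch of that update; the function name fgsm_attack and the eps default (0.007, the value used in the paper's ImageNet example) are illustrative, not the repository's actual API.

import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, eps=0.007):
    # One-step FGSM: nudge each pixel along the sign of the
    # loss gradient, then clamp back to the valid [0, 1] range.
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    adv_images = images + eps * images.grad.sign()
    return adv_images.clamp(0, 1).detach()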

No commits in the last 6 months.

Use this if you are a machine learning practitioner experimenting with adversarial attacks to test the vulnerability of your image classification models.

Not ideal if you are looking for a comprehensive toolkit for various adversarial attack methods or a maintained library for model defense strategies.

Tags: AI security, model robustness, computer vision, deep learning, research, adversarial examples
Flags: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 70
Forks: 16
Language: Jupyter Notebook
License: MIT
Last pushed: Sep 04, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Harry24k/FGSM-pytorch"

Open to everyone: 100 requests/day with no key needed; get a free key for 1,000/day.
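
For programmatic access, here is a minimal Python sketch using the requests library; the response schema is not documented in this section, so the snippet simply prints the parsed JSON for inspection.

import requests

URL = ("https://pt-edge.onrender.com"
       "/api/v1/quality/ml-frameworks/Harry24k/FGSM-pytorch")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces rate-limit or server errors (any 4xx/5xx)
print(resp.json())       # schema undocumented here; inspect the payload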