imrahulr/adversarial_robustness_pytorch

Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples" & "Fixing Data Augmentation to Improve Adversarial Robustness" in PyTorch

Quality score: 39 / 100 (Emerging)

This project helps machine learning researchers and practitioners evaluate and improve the security of their image classification models. Given an existing model and training data, it can either train the model to resist adversarial attacks or measure how well it holds up against standard attack methods. The output is a more robust model, or metrics quantifying its resilience.

No commits in the last 6 months.

Use this if you are developing or deploying image recognition systems and need to ensure they are not easily fooled by subtle, malicious changes to input images.

Not ideal if you are not working with image data, or if you care only about clean accuracy and not adversarial robustness.
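
For context on what adversarial training means here, the sketch below shows a minimal PGD (projected gradient descent) attack of the L-inf norm-bounded kind these papers defend against. It illustrates the general technique only, not this repository's API; the function name and hyperparameters are placeholders.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, n_steps=10):
    # Random start inside the L-inf ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(n_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                     # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                              # stay in valid pixel range
    return x_adv.detach()

Adversarial training then minimizes the classification loss on these worst-case inputs instead of (or in addition to) the clean ones.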

Tags: AI model security · image classification · machine learning research · computer vision · model robustness
Status: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 14 / 25

Stars: 99
Forks: 12
Language: Python
License: MIT
Last pushed: Mar 04, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/imrahulr/adversarial_robustness_pytorch"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
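
If you would rather call the endpoint from Python than curl, a minimal sketch follows. It assumes only that the endpoint returns JSON; the response's field names are not documented here, so the example just inspects whatever keys come back.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/imrahulr/adversarial_robustness_pytorch")

# Fetch and parse the JSON response.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# The schema is undocumented here, so list the keys actually returned.
print(sorted(data.keys()))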