nmndeep/revisiting-at

[NeurIPS 2023] Code for the paper "Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models"

Score: 23 / 100 (Experimental)

This project helps machine learning researchers and practitioners develop image classification models that are highly resistant to adversarial attacks. It provides code and pre-trained models for training robust image classifiers on large datasets such as ImageNet. By adjusting architectural components and training schemes, you can obtain models that are less susceptible to subtle, malicious perturbations of input images, making your systems more reliable in real-world applications.
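To make the idea concrete, here is a minimal, self-contained sketch of the core adversarial-training ingredient: crafting a perturbation that increases the model's loss, then (in a real training loop) fitting on the perturbed input. This is a toy one-step FGSM attack on logistic regression, not the repo's actual code, which uses full PGD adversarial training on ImageNet-scale networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, x, y):
    """Binary cross-entropy of a logistic model p = sigmoid(w . x)."""
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(w, x, y, eps):
    """One-step L-infinity attack: move x along sign(dLoss/dx).
    For logistic regression, dLoss/dx = (p - y) * w in closed form."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy setup: a fixed weight vector and a correctly classified input.
w = np.array([2.0, -1.0])
x = np.array([1.0, -0.5])   # w @ x = 2.5 > 0, so class 1 is predicted
y = 1.0

clean_loss = bce_loss(w, x, y)
x_adv = fgsm_perturb(w, x, y, eps=0.3)
adv_loss = bce_loss(w, x_adv, y)

# The attack increases the loss; adversarial training would now take a
# gradient step on (x_adv, y) instead of (x, y).
print(adv_loss > clean_loss)  # True
```

In the paper's setting this single gradient-sign step is replaced by multi-step PGD inside an epsilon ball, and the inner attack plus outer weight update are repeated over the whole dataset.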

No commits in the last 6 months.

Use this if you are building an image classification system and need to ensure your models are robust against adversarial examples, especially for high-stakes applications.

Not ideal if you are looking for a simple, out-of-the-box solution for basic image classification without concern for adversarial robustness.

Tags: ImageNet, classification, adversarial robustness, deep learning, security, computer vision, research, model hardening
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 8 / 25


Stars: 39
Forks: 3
Language: Python
License: None
Last pushed: Dec 03, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nmndeep/revisiting-at"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.