nmndeep/revisiting-at
[NeurIPS 2023] Code for the paper "Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models"
This project helps machine learning researchers and practitioners build image classifiers that resist adversarial attacks. It provides code and pre-trained models for adversarially robust training on large datasets such as ImageNet. By adjusting architectural components and training schemes, you can obtain models that are harder to fool with subtle, malicious perturbations of input images, making your systems more reliable in real-world applications.
No commits in the last 6 months.
Use this if you are building an image classification system and need to ensure your models are robust against adversarial examples, especially for high-stakes applications.
Not ideal if you are looking for a simple, out-of-the-box solution for basic image classification without concern for adversarial robustness.
Stars: 39
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Dec 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nmndeep/revisiting-at"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
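The same endpoint can also be queried from Python using only the standard library. This is a minimal sketch: only the URL pattern comes from the curl example above; the JSON response shape and the `X-API-Key` header name are assumptions, not documented behavior.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the endpoint URL for a repository's quality record."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch the quality record as a dict.

    Pass api_key for the higher (1,000/day) rate limit; the
    'X-API-Key' header name is an assumption.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For this repository, `fetch_quality("ml-frameworks", "nmndeep", "revisiting-at")` requests the same URL as the curl command above.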
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...