dedeswim/vits-robustness-torch
Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023]
This project helps machine learning researchers improve the resilience of Vision Transformer (ViT) models against adversarial attacks. It provides a specialized training recipe that takes image datasets as input and produces more robust ViT models, ones that maintain performance even when input images are subtly and maliciously perturbed. It is aimed at machine learning scientists and researchers focused on model security.
No commits in the last 6 months.
Use this if you are a machine learning researcher or practitioner aiming to build Vision Transformer models that are robust to adversarial attacks and want to explore a "light" training recipe for improved security.
Not ideal if your primary goal is maximizing clean classification accuracy without a specific focus on adversarial robustness, or if you work exclusively with convolutional neural networks (CNNs).
Stars
54
Forks
3
Language
Jupyter Notebook
License
—
Category
Last pushed
Feb 06, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dedeswim/vits-robustness-torch"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...