dedeswim/vits-robustness-torch

Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023]

Score: 23 / 100 (Experimental)

This project helps machine learning researchers improve the resilience of Vision Transformer (ViT) models against adversarial attacks. It provides a specialized training recipe that takes image datasets as input and outputs more robust ViT models, ones that maintain performance even when input images are subtly and maliciously altered. It is aimed at machine learning scientists and researchers focused on model security.
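To make "subtle, malicious alterations" concrete, here is a hypothetical sketch (not code from this repository) of a one-step FGSM-style perturbation against a toy linear classifier: each input feature is nudged by a small `eps` in the direction that most hurts the model's score.

```python
# Hypothetical illustration of an adversarial perturbation (FGSM-style),
# NOT code from this repository: a toy linear classifier whose margin is
# y * (w . x); the attack shifts each feature by eps in the direction
# that decreases that margin.
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, w, y, eps):
    # The gradient of the margin y*(w . x) w.r.t. x is y*w; stepping
    # along -sign(y*w) lowers the margin, pushing the input toward
    # misclassification while changing each feature by at most eps.
    return [xi - eps * sign(y * wi) for xi, wi in zip(x, w)]

x = [4.0, 1.0, 1.0]   # clean input (margin = 4 + 2 - 1 = 5.0)
w = [1.0, 2.0, -1.0]  # model weights
y = 1                 # true label
x_adv = fgsm_perturb(x, w, y, eps=0.5)
margin = sum(wi * xi for wi, xi in zip(w, x_adv))  # drops from 5.0 to 3.0
```

Adversarial training, the approach this repository builds on, generates such perturbed inputs during training and fits the model on them.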

No commits in the last 6 months.

Use this if you are a machine learning researcher or practitioner aiming to build Vision Transformer models that are robust to adversarial attacks and want to explore a 'light' training recipe for improved security.

Not ideal if your primary goal is to maximize general classification accuracy without a specific focus on adversarial robustness, or if you work exclusively with convolutional neural networks (CNNs).

adversarial-robustness computer-vision model-security deep-learning-research image-classification
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 7 / 25


Stars: 54
Forks: 3
Language: Jupyter Notebook
License: None
Last pushed: Feb 06, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dedeswim/vits-robustness-torch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
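As a sketch, the same endpoint can also be queried from Python using only the standard library. The URL is taken from the curl example above; the structure of the JSON response is not documented here, so only the raw fetch is shown (field names would be assumptions).

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/dedeswim/vits-robustness-torch")

def fetch_quality_report(url: str = URL) -> dict:
    """Fetch the quality report and parse it as JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# report = fetch_quality_report()  # requires network access
```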