sayakpaul/robustness-vit

Contains code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).

Quality score: 42 / 100 (Emerging)

This project helps machine learning researchers and practitioners understand why Vision Transformers (ViT) are more robust than traditional Convolutional Neural Networks (CNNs) when processing images. By analyzing various ImageNet datasets, it provides quantitative and qualitative evidence to show how ViTs maintain performance even with corrupted or perturbed image inputs. Researchers can use this to guide model selection and development for robust computer vision applications.

122 stars. No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer interested in the robustness of computer vision models against real-world image distortions and want to explore the underlying reasons for Vision Transformers' superior performance.

Not ideal if you are looking for a ready-to-use application or a simple Python library for general image classification without a deep interest in model robustness research.

computer-vision machine-learning-research model-robustness image-classification deep-learning
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 122
Forks: 18
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 03, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sayakpaul/robustness-vit"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
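If you prefer to query the API from Python rather than curl, a minimal sketch follows. The endpoint URL is taken from the curl command above; the response is assumed to be JSON, and any field names you read from it are assumptions, since the schema is not documented here.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo: str) -> str:
    """Build the quality-report URL for a given owner/name repo slug."""
    return f"{BASE}/{repo}"

def fetch_quality(repo: str) -> dict:
    """Fetch the quality report as a dict (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_url(repo), timeout=10) as resp:
        return json.load(resp)

url = quality_url("sayakpaul/robustness-vit")
print(url)
```

Calling `fetch_quality("sayakpaul/robustness-vit")` would return the parsed report, subject to the 100 requests/day limit for keyless access.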