sayakpaul/robustness-vit
Contains code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
This project helps machine learning researchers and practitioners understand why Vision Transformers (ViTs) are more robust than traditional Convolutional Neural Networks (CNNs) when processing corrupted or perturbed images. By evaluating models on various ImageNet robustness benchmarks, it provides quantitative and qualitative evidence of how ViTs maintain performance under such distortions. Researchers can use these findings to guide model selection and development for robust computer vision applications.
122 stars. No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer interested in the robustness of computer vision models against real-world image distortions and want to explore the underlying reasons for Vision Transformers' superior performance.
Not ideal if you are looking for a ready-to-use application or a simple Python library for general image classification without a deep interest in model robustness research.
Stars
122
Forks
18
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Dec 03, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sayakpaul/robustness-vit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Kohulan/DECIMER-Image_Transformer
DECIMER Image Transformer is a deep-learning-based tool designed for automated recognition of...
sovit-123/vision_transformers
Vision Transformers for image classification, image segmentation, and object detection.
fcakyon/video-transformers
Easiest way of fine-tuning HuggingFace video classification models
leaderj1001/BottleneckTransformers
Bottleneck Transformers for Visual Recognition
qubvel/transformers-notebooks
Inference and fine-tuning examples for vision models from 🤗 Transformers