tobna/WhatTransformerToFavor
GitHub repository for the paper "Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers".
This project helps machine learning engineers and researchers select the best Vision Transformer model for computer vision tasks such as image classification. It takes a Vision Transformer model and an image dataset as input, then outputs comprehensive measurements of the model's accuracy, speed, and memory usage, letting practitioners make informed decisions when choosing or developing efficient models.
Use this if you need to objectively compare the efficiency and performance of different Vision Transformer models for image classification under standardized conditions.
Not ideal if you are looking for a pre-trained model to use directly without needing to evaluate its efficiency or compare it to others.
Stars: 33
Forks: 7
Language: Python
License: MIT
Category:
Last pushed: Feb 25, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tobna/WhatTransformerToFavor"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
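As a sketch, the same endpoint can also be queried from Python with the standard library. The response schema is not documented here, so the code simply parses whatever JSON the API returns; the helper names (`quality_url`, `get_quality`) are ours, not part of the API:

```python
import json
import urllib.request

# Base path of the quality API, taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"


def get_quality(owner: str, repo: str) -> dict:
    """Fetch quality data for a repository; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # No API key needed within the 100 requests/day anonymous limit.
    print(get_quality("tobna", "WhatTransformerToFavor"))
```

The network call is kept under the `__main__` guard so the module can be imported without triggering a request.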
Higher-rated alternatives
Jittor/jittor
Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.
zhanghang1989/ResNeSt
ResNeSt: Split-Attention Networks
berniwal/swin-transformer-pytorch
Implementation of the Swin Transformer in PyTorch.
NVlabs/FasterViT
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with...
ViTAE-Transformer/ViTPose
The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose...