tobna/WhatTransformerToFavor

GitHub repository for the paper Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers.

Score: 49 / 100 (Emerging)

This project helps machine learning engineers and researchers select the best Vision Transformer model for computer vision tasks like image classification. It takes a Vision Transformer model and image dataset as input, then outputs comprehensive measurements of its accuracy, speed, and memory usage. This allows practitioners to make informed decisions when choosing or developing efficient models.
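
To make "accuracy, speed, and memory usage" concrete, the sketch below shows the kind of measurement such a benchmark reports. It is a generic PyTorch example, not this repository's actual API: the torchvision vit_b_16 model, batch size, and iteration counts are illustrative assumptions standing in for the model and dataset under test.

import time
import torch
import torchvision

# Generic efficiency-measurement sketch (not this repository's API):
# times inference throughput and peak GPU memory for a stand-in ViT.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.vit_b_16().to(device).eval()
batch = torch.randn(32, 3, 224, 224, device=device)  # dummy image batch

with torch.no_grad():
    for _ in range(5):  # warm-up iterations
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
        torch.cuda.reset_peak_memory_stats()
    start = time.perf_counter()
    for _ in range(20):  # timed iterations
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"throughput: {20 * batch.shape[0] / elapsed:.1f} images/s")
if device == "cuda":
    print(f"peak memory: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")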

Use this if you need to objectively compare the efficiency and performance of different Vision Transformer models for image classification under standardized conditions.

Not ideal if you are looking for a pre-trained model to use directly without needing to evaluate its efficiency or compare it to others.

computer-vision image-classification model-evaluation machine-learning-research deep-learning-optimization
No package · No dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 33
Forks: 7
Language: Python
License: MIT
Last pushed: Feb 25, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tobna/WhatTransformerToFavor"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
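
For programmatic use, a minimal Python sketch of the same request is shown below, assuming the endpoint returns JSON; the field names accessed at the end are assumptions based on the values shown on this page, not a documented schema.

import json
import urllib.request

# Same endpoint as the curl command above.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/tobna/WhatTransformerToFavor"
)

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON response body

# "score" and "stars" are hypothetical field names, not a documented schema.
print(data.get("score"), data.get("stars"))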