kirill-vish/Beyond-INet
Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy"
This project helps computer vision practitioners analyze and compare different image classification models beyond simple accuracy scores. It takes trained ConvNet or Vision Transformer models (either supervised or CLIP-trained) as input, and outputs detailed evaluations on aspects like robustness, calibration, and shape/texture bias. This is for researchers and engineers who need to select the most suitable vision model for specialized applications, not just general image recognition.
102 stars. No commits in the last 6 months.
Use this if you need to thoroughly evaluate and compare the nuanced performance of different image classification models for specific real-world computer vision tasks.
Not ideal if you are looking for an off-the-shelf, plug-and-play image classification model for general purposes without needing in-depth comparative analysis.
Stars: 102
Forks: 5
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Sep 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kirill-vish/Beyond-INet"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
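The same lookup can be done from Python with the standard library. This is a minimal sketch around the endpoint shown above; the `Authorization: Bearer` header and the response field names are assumptions, so check the actual JSON payload to confirm the schema.

```python
import json
import urllib.request

# Base of the pt-edge quality API (path taken from the curl example above).
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def repo_quality(category, owner, name, api_key=None):
    """Fetch quality data for one repo, e.g. ("ml-frameworks", "kirill-vish", "Beyond-INet")."""
    url = f"{BASE}/{category}/{owner}/{name}"
    req = urllib.request.Request(url)
    if api_key:
        # Assumption: a key is passed as a bearer token; this raises the
        # daily limit from 100 to 1,000 requests per the note above.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (uncomment to run; requires network access):
# data = repo_quality("ml-frameworks", "kirill-vish", "Beyond-INet")
# print(data)
```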
Higher-rated alternatives
facebookincubator/MCGrad
MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model...
dholzmueller/probmetrics
Post-hoc calibration methods and metrics for classification
gpleiss/temperature_scaling
A simple way to calibrate your neural network.
yfzhang114/Generalization-Causality
On domain generalization, domain adaptation, causality, robustness, prompts, optimization, generative...
Affirm/splinator
Splinator: probabilistic calibration with regression splines