jacobgil/vit-explain
Explainability for Vision Transformers
This tool helps you understand which parts of an image a Vision Transformer focuses on when making a prediction. You input an image and get back a heatmap highlighting the most influential regions, showing either overall attention or attention tied to a specific object category. This is useful for AI engineers, researchers, or anyone debugging or validating image classification models.
1,072 stars. No commits in the last 6 months.
Use this if you need to visualize and interpret why a Vision Transformer model made a particular classification for an image.
Not ideal if you are working with non-image data or need to explain models other than Vision Transformers.
Stars: 1,072
Forks: 109
Language: Python
License: MIT
Last pushed: Mar 12, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jacobgil/vit-explain"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
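For programmatic use, the same endpoint can be queried from Python with the standard library. This is a minimal sketch: the `quality_url` and `fetch_quality` helper names are illustrative (not part of the API), and the JSON response schema is not documented here, so the result is returned as a plain dict.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality data."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON response (schema undocumented here)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same request as the curl example, counted against the keyless 100/day limit.
    print(quality_url("ml-frameworks", "jacobgil", "vit-explain"))
```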
Higher-rated alternatives
obss/sahi: Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav: Code for the TCAV ML interpretability project
MAIF/shapash: 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5: A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...