jacobgil/vit-explain

Explainability for Vision Transformers

Score: 45 / 100 (Emerging)

This tool helps you understand which parts of an image a Vision Transformer focuses on when making a decision. You input an image and get a heatmap highlighting the regions the model attends to most, either overall or for a specific object category. This is useful for AI engineers, researchers, or anyone debugging or validating image classification models.
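For orientation, here is a minimal sketch of how the repository's attention-rollout visualization is typically invoked. The VITAttentionRollout class, its head_fusion and discard_ratio arguments, and the DeiT checkpoint are taken from the project README and should be treated as assumptions, not a verified API.

import torch
from PIL import Image
from torchvision import transforms
from vit_rollout import VITAttentionRollout  # module shipped in this repo (assumed name)

# Load a small pretrained ViT (DeiT) via torch.hub, as the README does.
model = torch.hub.load('facebookresearch/deit:main',
                       'deit_tiny_patch16_224', pretrained=True)
model.eval()

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
input_tensor = transform(Image.open('cat.png').convert('RGB')).unsqueeze(0)

# head_fusion picks how attention heads are combined ('mean'/'max'/'min');
# discard_ratio drops the weakest attention paths before the rollout.
rollout = VITAttentionRollout(model, head_fusion='max', discard_ratio=0.9)
mask = rollout(input_tensor)  # 2D heatmap to overlay on the input image

For category-specific heatmaps, the README describes a gradient-rollout variant (VITAttentionGradRollout) that additionally takes a target category index.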

1,072 stars. No commits in the last 6 months.

Use this if you need to visualize and interpret why a Vision Transformer model made a particular classification for an image.

Not ideal if you are working with non-image data or need to explain models other than Vision Transformers.

Tags: AI explainability, computer vision, model interpretation, image classification, deep learning, research
Badges: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25

The four subscores sum to the overall score: 0 + 10 + 16 + 19 = 45 / 100.


Stars: 1,072
Forks: 109
Language: Python
License: MIT
Last pushed: Mar 12, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jacobgil/vit-explain"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
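If you prefer Python over curl, a minimal equivalent might look like the sketch below. The JSON response shape and the API-key header name are assumptions, as the card does not document them.

import requests

# Minimal sketch: fetch the same quality record in Python. The response
# is assumed to be JSON; its field names are not documented in this card.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/jacobgil/vit-explain")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())

# With a key (header name is hypothetical; check the service docs):
# resp = requests.get(url, headers={"X-API-Key": "YOUR_KEY"}, timeout=10)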