jacobgil/pytorch-grad-cam

Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.

Quality score: 60 / 100 (Established)

pytorch-grad-cam helps data scientists, machine learning engineers, and researchers understand why their computer vision models make specific decisions. Given a trained image classification, object detection, or segmentation model, it produces visual heatmaps highlighting the regions of an image that most influenced the model's prediction. This lets users diagnose model errors, build trust in AI systems, and improve model performance.

12,682 stars. Used by 3 other packages. No commits in the last 6 months. Available on PyPI.

Use this if you need to visually interpret the internal workings of your computer vision models to explain their predictions.

Not ideal if you are looking for explainability methods for non-visual AI models or tabular data.

Tags: AI-explainability, computer-vision, model-debugging, machine-learning-operations, deep-learning-research

Stale: 6 months
Maintenance: 0 / 25
Adoption: 13 / 25
Maturity: 25 / 25
Community: 22 / 25


Stars: 12,682
Forks: 1,694
Language: Python
License: MIT
Last pushed: Apr 07, 2025
Commits (30d): 0
Dependencies: 9
Reverse dependents: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jacobgil/pytorch-grad-cam"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
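A minimal sketch of consuming this endpoint from Python with only the standard library. The response field names (`score`, `stars`, `forks`, `license`) are assumptions based on the stats shown above, not a documented schema:

```python
import json
import urllib.request

# Endpoint from the page; no key is needed for up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/jacobgil/pytorch-grad-cam")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality report and decode it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Illustrative only: a hypothetical payload mirroring the page's stats,
# so the parsing step can be shown without a live request.
sample = '{"score": 60, "stars": 12682, "forks": 1694, "license": "MIT"}'
report = json.loads(sample)
print(report["score"], report["license"])
```

Swap `json.loads(sample)` for `fetch_quality()` to work against the live API; for more than 1,000 requests/day you would need the key mentioned above.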