experiencor/deep-viz-keras
Implementations of some popular Saliency Maps in Keras
This project helps machine learning engineers and researchers understand why a Convolutional Neural Network (CNN) makes a specific prediction for an image. Given an image and a trained CNN model, it produces a 'saliency map': a visual overlay highlighting the regions of the image that most influenced the model's decision. It is aimed at anyone who needs to interpret or debug image classification models.
166 stars. No commits in the last 6 months.
Use this if you need to visualize which regions of an input image are most influential in your Keras CNN's classification of that image.
Not ideal if you are working with non-image data or require interpretability methods for models other than Keras CNNs.
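The core technique behind these visualizations is vanilla gradient saliency (Simonyan et al., 2013): take the gradient of the predicted class score with respect to the input pixels, then keep the maximum absolute gradient across channels. The repo applies this to real Keras CNNs via backpropagation; the sketch below is a hedged, dependency-free NumPy illustration that uses a toy linear "model" (whose input gradient is just its weight tensor) rather than the repo's actual API.

```python
import numpy as np

# Toy stand-in for a trained classifier: one linear score per class.
# This is NOT the repo's API; it only illustrates the saliency idea.
rng = np.random.default_rng(0)
H, W, C, n_classes = 8, 8, 3, 5

# "Trained" weights: one weight per (pixel, channel, class).
weights = rng.normal(size=(H, W, C, n_classes))

def class_score(image, cls):
    """Linear class score: sum over pixel * weight."""
    return float(np.sum(image * weights[..., cls]))

def saliency_map(image, cls):
    """Gradient of the class score w.r.t. each input pixel.

    For this linear model the gradient is simply the weight slice;
    for a real Keras CNN an autodiff framework would compute it by
    backpropagation. Following the original paper, the map takes the
    max absolute gradient over channels, giving one value per pixel.
    """
    grad = weights[..., cls]          # d(score)/d(pixel), shape (H, W, C)
    return np.abs(grad).max(axis=-1)  # collapse channels -> (H, W)

image = rng.uniform(size=(H, W, C))
pred = int(np.argmax([class_score(image, c) for c in range(n_classes)]))
smap = saliency_map(image, pred)
print(smap.shape)  # (8, 8)
```

In practice the resulting (H, W) map is normalized and overlaid on the input image as a heatmap; high values mark pixels whose perturbation would most change the class score.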
Stars: 166
Forks: 30
Language: Jupyter Notebook
License: —
Category: ml-frameworks
Last pushed: May 11, 2019
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/experiencor/deep-viz-keras"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jacobgil/pytorch-grad-cam
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers,...
frgfm/torch-cam
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++,...
jacobgil/keras-grad-cam
An implementation of Grad-CAM with keras
ramprs/grad-cam
[ICCV 2017] Torch code for Grad-CAM
matlab-deep-learning/Explore-Deep-Network-Explainability-Using-an-App
This repository provides an app for exploring the predictions of an image classification network...