experiencor/deep-viz-keras

Implementations of some popular Saliency Maps in Keras

Quality score: 37 / 100 (Emerging)

This project helps machine learning engineers and researchers understand why a Convolutional Neural Network (CNN) makes a specific prediction for an image. You feed in an image and a trained CNN model, and it outputs a 'saliency map': a visual overlay highlighting the parts of the image that most influenced the model's decision. It is aimed at anyone who needs to interpret or debug image classification models.

166 stars. No commits in the last 6 months.

Use this if you need to visualize which regions of an input image are most influential in your Keras CNN's classification of that image.

Not ideal if you are working with non-image data or require interpretability methods for models other than Keras CNNs.
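The simplest of the saliency techniques this repo covers is the vanilla gradient map: take the gradient of the class score with respect to the input pixels, and large absolute gradients mark influential regions. Below is a minimal, hedged sketch of that idea using `tf.GradientTape`; the function name and the tiny untrained demo model are illustrative, not this repo's actual API.

```python
# Sketch of a vanilla-gradient saliency map (illustrative, not the repo's API).
import numpy as np
import tensorflow as tf

def gradient_saliency(model, image, class_index):
    """Return |d(class score)/d(pixel)|, same HxW as the input image."""
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)                       # track gradients w.r.t. the input
        score = model(x)[0, class_index]    # scalar score for the target class
    grads = tape.gradient(score, x)         # shape (1, H, W, C)
    # Collapse channels: keep the max absolute gradient per pixel.
    return tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()

# Tiny untrained model, just to show the call shape (28x28 grayscale input).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

image = np.random.rand(28, 28, 1).astype("float32")
saliency = gradient_saliency(model, image, class_index=3)
print(saliency.shape)  # (28, 28) -- overlay this on the image to visualize
```

In practice you would run this on a trained model and a real image; variants such as SmoothGrad or guided backpropagation refine the same gradient signal to reduce noise.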

deep-learning-interpretability computer-vision model-debugging image-classification neural-network-explanation
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 19 / 25


Stars: 166
Forks: 30
Language: Jupyter Notebook
License: None
Last pushed: May 11, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/experiencor/deep-viz-keras"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.