gsurma/cnn_explainer

Making CNNs interpretable.

Quality score: 30 / 100 (Emerging)

This project helps you understand why an image classification model made a specific decision. Given an image and your trained model, it generates visual explanations such as heatmaps and feature visualizations. This is useful for AI/ML practitioners, researchers, and data scientists who need to audit or explain the behavior of their computer vision models.

No commits in the last 6 months.

Use this if you need to visualize and interpret the decision-making process of your convolutional neural networks (CNNs) for image classification tasks.

Not ideal if you are working with other types of neural networks or data beyond images, or if you need to optimize model performance rather than explain it.
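
To make the heatmap idea concrete, here is a minimal sketch of Grad-CAM, one common technique for producing class-specific heatmaps, written in TensorFlow/Keras. This is illustrative only, not this repository's own API: the function name grad_cam and the example layer name conv5_block3_out are assumptions on my part, and the notebook itself may implement a different explainability method.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    # Model mapping the input image to the last conv layer's
    # activations and to the final class predictions.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Channel weights: gradients global-average-pooled over space.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, ReLU'd, normalized to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Hypothetical usage with a Keras ResNet50 (the layer name is an
# assumption; substitute the last conv layer of your own model):
# model = tf.keras.applications.ResNet50(weights="imagenet")
# heatmap = grad_cam(model, preprocessed_image, "conv5_block3_out")

The returned heatmap has the spatial resolution of the chosen conv layer; upsample it to the input size and overlay it on the image to see which regions drove the prediction.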

Tags: AI explainability, computer vision, model interpretation, image classification, machine learning auditing
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 19
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 09, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/gsurma/cnn_explainer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
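
If you would rather call the endpoint from code than from curl, here is a minimal Python sketch using the requests package. The response schema is not documented on this page, so the sketch simply prints the raw JSON.

import requests

# Same endpoint as the curl example above; no API key is needed
# within the 100 requests/day free tier.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/gsurma/cnn_explainer")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
print(resp.json())       # schema undocumented here; inspect the raw JSON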