sandareka/Interpretability-of-Machine-Learning-Visualizations

Interpretability of Machine Learning-Visualizations

24 / 100
Experimental

When a machine learning model classifies an image, this tool helps you understand *why* it made that decision. You provide an image and the model's classification, and it highlights the specific areas of the image that were most important for that classification. This is useful for anyone who needs to verify, explain, or debug image classification models, such as AI researchers or quality assurance specialists.
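Highlighting the image regions that drove a classification is a form of saliency mapping. As a rough illustration only (not this repository's actual method), the sketch below computes an occlusion-based saliency map: it slides a blank patch over the image and records how much the classifier's score drops at each position. The `model_score` function is a hypothetical stand-in for a real classifier.

```python
import numpy as np

def model_score(image):
    # Hypothetical stand-in for a classifier's class score: this toy
    # "model" simply scores how bright the top-left 8x8 quadrant is.
    return float(image[:8, :8].mean())

def occlusion_saliency(image, score_fn, patch=4):
    """Slide a zeroed patch over the image; the score drop at each
    position estimates how important that region was to the score."""
    base = score_fn(image)
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heatmap[i // patch, j // patch] = base - score_fn(occluded)
    return heatmap

img = np.ones((16, 16))
heat = occlusion_saliency(img, model_score)
# Positions inside the top-left quadrant show the largest score drops,
# i.e. the "important" regions for this toy model.
```

Real tools of this kind typically use gradient- or activation-based methods (e.g. CAM-style approaches) rather than brute-force occlusion, but the interpretation of the resulting heatmap is the same.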

No commits in the last 6 months.

Use this if you need to visualize which regions of an image a classification model focused on when making its decision.

Not ideal if you are working with non-image data or require explanations for model types other than image classifiers.

image-classification model-explanation computer-vision AI-explainability deep-learning-debugging
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 11 / 25


Stars: 13
Forks: 2
Language: Python
License: none
Last pushed: Jul 09, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sandareka/Interpretability-of-Machine-Learning-Visualizations"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.