sandareka/Interpretability-of-Machine-Learning-Visualizations
Interpretability of Machine Learning-Visualizations
When a machine learning model classifies an image, this tool helps you understand *why* it made that decision. You provide an image and the model's classification, and it highlights the specific areas of the image that were most important for that classification. This is useful for anyone who needs to verify, explain, or debug image classification models, such as AI researchers or quality assurance specialists.
No commits in the last 6 months.
Use this if you need to visualize which parts of an image an image classification model focused on when making its decision.
Not ideal if you are working with non-image data or require explanations for model types other than image classifiers.
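The repo's exact attribution method isn't stated on this page, but the "highlight the areas that mattered" idea can be sketched with occlusion sensitivity: slide a masking patch over the image and record how much the model's score drops. Everything below is a minimal NumPy-only illustration with a toy scoring function standing in for a real classifier.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Slide a zeroed-out patch over the image; larger score drops mean
    the occluded region mattered more to the classification."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h, w))
    for y in range(0, h - patch + 1):
        for x in range(0, w - patch + 1):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0
            drop = base - score_fn(occluded)
            # Keep the largest drop seen for each covered pixel.
            heat[y:y + patch, x:x + patch] = np.maximum(
                heat[y:y + patch, x:x + patch], drop)
    return heat

# Toy "classifier": score is the mean intensity of the central 8x8 region.
def toy_score(img):
    return float(img[12:20, 12:20].mean())

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0  # bright object in the center
heat = occlusion_map(img, toy_score)
# The heatmap peaks over the center region the toy classifier relies on.
```

With a real model, `score_fn` would return the probability of the class you are explaining, and the patch would typically be gray or mean-colored rather than zero.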
Stars: 13
Forks: 2
Language: Python
License: —
Last pushed: Jul 09, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sandareka/Interpretability-of-Machine-Learning-Visualizations"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
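The same endpoint can be called from Python with the standard library. This sketch only builds the URL shown in the curl example and defines a fetch helper; it assumes the endpoint returns a JSON body (the response schema isn't shown on this page).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-repo quality endpoint shown in the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    """GET the endpoint and decode the JSON body. No API key is needed
    for up to 100 requests/day, per the note above."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "sandareka",
                  "Interpretability-of-Machine-Learning-Visualizations")
```

A free key (for the 1,000/day tier) would presumably be passed as a header or query parameter, but this page doesn't specify how.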
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...