KentaItakura/Explainable-AI-interpreting-the-classification-performed-by-deep-learning-with-LIME-using-MATLAB

This demo shows how to interpret image classifications made by a CNN using LIME (Local Interpretable Model-agnostic Explanations).

Score: 27 / 100 (Experimental)

This tool helps scientists, engineers, and researchers understand *why* a deep learning model classified an image the way it did. You provide an image and the trained model that classified it, and the tool outputs a visual overlay on the original image highlighting the regions that most influenced the prediction. It is aimed at anyone working with image classification who needs to verify model trustworthiness or diagnose model errors.
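For orientation, here is a minimal MATLAB sketch of this kind of workflow, using the Deep Learning Toolbox's built-in imageLIME function with a pretrained googlenet and MATLAB's stock peppers.png image. This is an illustrative assumption about the approach, not necessarily the demo's own code:

% Minimal LIME sketch (assumes Deep Learning Toolbox and the GoogLeNet
% support package; the demo's own code may differ).
net = googlenet;                               % pretrained image classifier
inputSize = net.Layers(1).InputSize(1:2);      % network input height/width
img = imresize(imread('peppers.png'), inputSize);

% Classify, then ask LIME which superpixel regions drove that label.
label = classify(net, img);
scoreMap = imageLIME(net, img, label, ...
    'NumFeatures', 64, ...                     % superpixel features to rank
    'NumSamples', 2048);                       % perturbed samples for the local model

% Overlay the importance map on the original image.
figure
imshow(img)
hold on
imagesc(scoreMap, 'AlphaData', 0.5)            % semi-transparent heat map
colormap jet
title("LIME evidence for '" + string(label) + "'")

High-scoring regions in the overlay are the superpixels whose presence most increased the network's score for the predicted class.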

No commits in the last 6 months.

Use this if you need to explain or interpret the decision-making process of a Convolutional Neural Network (CNN) for image classification.

Not ideal if you are working with non-image data or deep learning models other than CNNs, or if you require an explanation method beyond LIME.

deep-learning-interpretability image-analysis model-explanation computer-vision AI-trust-verification
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 13
Forks: 1
Language: MATLAB
License: BSD-3-Clause
Last pushed: Dec 06, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/KentaItakura/Explainable-AI-interpreting-the-classification-performed-by-deep-learning-with-LIME-using-MATLAB"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
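To pretty-print the JSON response, you can pipe it through jq; the response schema is not documented here, so no particular fields are assumed:

curl -s "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/KentaItakura/Explainable-AI-interpreting-the-classification-performed-by-deep-learning-with-LIME-using-MATLAB" | jq .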