arm-on/interpretable-image-classification

Interpretability methods applied to image classifiers trained on MNIST and CIFAR10

Score: 18 / 100 (Experimental)

This project helps machine learning researchers and practitioners understand why an image classification model makes a particular decision. It takes pre-trained image classifiers and visualizes which parts of an input image (like a handwritten digit or an animal photo) are most important for the model's prediction. The output helps users interpret the 'reasoning' behind the classification.
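For illustration, here is a minimal sketch of one such technique, input-gradient saliency, in PyTorch. This is a hypothetical example, not code from the repository: the untrained resnet18 and the random tensor are stand-ins for a pre-trained classifier and a real CIFAR10 image.

import torch
import torchvision

# Hypothetical sketch: input-gradient saliency, the kind of attribution this
# project visualizes. resnet18 with untrained weights and a random input are
# stand-ins; any trained CIFAR10 classifier would slot in the same way.
model = torchvision.models.resnet18(weights=None)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in CIFAR10 input
logits = model(image)
score = logits[0, logits.argmax()]  # score of the predicted class
score.backward()                    # gradient of that score w.r.t. each pixel

# One importance value per pixel: max absolute gradient across color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([32, 32])

Plotting the resulting 32x32 saliency map next to the input image is the typical way such attributions are inspected.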

No commits in the last 6 months.

Use this if you need to evaluate and compare different interpretability techniques for deep learning image classifiers, especially for digit or object recognition tasks.

Not ideal if you are working with non-image data, require interpretability for models other than deep neural networks, or need a production-ready interpretability library.

Machine Learning Research · Image Recognition · Model Explainability · Deep Learning Interpretation · Computer Vision Debugging
No License · Stale 6m · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 4 / 25


Stars: 24
Forks: 1
Language: Jupyter Notebook
License: None
Last pushed: Oct 17, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/arm-on/interpretable-image-classification"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
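The same data can be fetched from Python using only the standard library; a minimal sketch, with the endpoint URL copied from the curl command above (the response schema is not documented here, so the JSON is simply printed as returned):

import json
import urllib.request

# Endpoint taken verbatim from the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/arm-on/interpretable-image-classification"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)  # assumption: the API returns a JSON body
print(json.dumps(data, indent=2))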