lazyCodes7/blacbox
Making CNNs interpretable, because accuracy can't cut it anymore :p
This project helps machine learning engineers and researchers understand why their image-based models make specific predictions. Given a trained model and an input image, it produces visual attribution maps that highlight the image regions that most influenced the model's decision, so practitioners can verify that the model is focusing on relevant features rather than irrelevant background details; a minimal code illustration follows the usage notes below.
No commits in the last 6 months.
Use this if you need to debug or build trust in your computer vision models by visually inspecting what parts of an image are most critical to their predictions.
Not ideal if you need to interpret non-image models or want cutting-edge interpretability techniques the project does not yet implement.
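For a sense of what such an attribution map looks like in code, here is a minimal vanilla-gradient saliency sketch in plain PyTorch. It illustrates the general technique only, not blacbox's own API (which is not documented on this page); the model choice and the input file cat.jpg are assumptions.

# Minimal vanilla-gradient saliency sketch (plain PyTorch).
# Illustrates the general technique; NOT blacbox's API.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("cat.jpg").convert("RGB")   # hypothetical input image
x = preprocess(img).unsqueeze(0).requires_grad_(True)

logits = model(x)
logits[0, logits.argmax()].backward()        # gradient of the top class w.r.t. pixels

# Saliency map: max absolute gradient over the three color channels.
saliency = x.grad.abs().max(dim=1)[0].squeeze()   # (224, 224) heatmap

Overlaying saliency on the input image gives the kind of heatmap this kind of tool produces.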
Stars: 11
Forks: —
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 22, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lazyCodes7/blacbox"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
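The same request in Python, as a sketch (the response format is not documented here; assume JSON and inspect it):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lazyCodes7/blacbox"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()   # schema is an assumption: inspect the returned dict for available fields
print(data)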
Higher-rated alternatives
obss/sahi - Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav - Code for the TCAV ML interpretability project
MAIF/shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5 - A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...