csinva/hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
When a neural network makes a prediction, it's often hard to understand why. This tool implements agglomerative contextual decomposition (ACD): it takes a single prediction from a PyTorch neural network and reveals the hierarchical importance of different input features to that specific outcome. It helps machine learning engineers and researchers see which parts of an image, text, or other data were most influential, from broad strokes down to fine details.
129 stars. No commits in the last 6 months.
Use this if you need to understand the reasoning behind an individual prediction from your PyTorch neural network, rather than just knowing what the network predicted.
Not ideal if you are looking for a general explanation of how your model works across all predictions, or if you are not working with PyTorch neural networks.
Stars: 129
Forks: 22
Language: Jupyter Notebook
License: MIT
Last pushed: Aug 25, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/csinva/hierarchical-dnn-interpretations"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
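The same endpoint can be called from Python instead of curl. A minimal sketch, using only the standard library; the endpoint URL is taken from the listing above, but the JSON field names are not documented here, so the helper simply returns the parsed payload. `quality_url` and `fetch_quality` are hypothetical helper names, not part of any published client.

```python
# Sketch: fetch a repository's quality record from the API shown above.
# Assumptions: the endpoint returns JSON; field names are not specified here.
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(repo: str) -> str:
    """Build the API URL for an 'owner/name' repository slug."""
    return f"{API_BASE}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch and parse the quality record for a repository.

    Note: the free tier allows 100 requests/day without a key,
    per the listing above.
    """
    with urlopen(quality_url(repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example (performs a live network request, so it is left commented out):
# data = fetch_quality("csinva/hierarchical-dnn-interpretations")
```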
Higher-rated alternatives
obss/sahi
Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...