csinva/hierarchical-dnn-interpretations

Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)

Score: 44 / 100 (Emerging)

When a neural network makes a prediction, it's often hard to understand why. This tool takes a single prediction from a PyTorch neural network and reveals the hierarchical importance of different input features to that specific outcome, letting machine learning engineers and researchers see which parts of an image, text, or other data were most influential, from broad strokes down to fine details.

129 stars. No commits in the last 6 months.

Use this if you need to understand the reasoning behind an individual prediction from your PyTorch neural network, rather than just knowing what the network predicted.

Not ideal if you are looking for a general explanation of how your model works across all predictions, or if you are not working with PyTorch neural networks.
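To make the idea of "hierarchical importance for a single prediction" concrete, here is a minimal occlusion-style sketch in plain PyTorch. This is an illustrative approximation only, not the ACD algorithm from the paper (ACD uses contextual decomposition, not occlusion), and the model and input are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the user's trained PyTorch model and input.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(1, 8)
target = model(x).argmax(dim=1).item()  # the class this prediction picked

def group_importance(groups):
    """Occlusion-style score for each group of input indices:
    the drop in the predicted logit when that group is zeroed out.
    (Illustrative only; ACD proper uses contextual decomposition.)"""
    base = model(x)[0, target].item()
    scores = {}
    for g in groups:
        x_masked = x.clone()
        x_masked[0, list(g)] = 0.0
        scores[g] = base - model(x_masked)[0, target].item()
    return scores

# Level 1: individual features; level 2: adjacent pairs, mimicking
# ACD's bottom-up agglomeration from fine details to broad strokes.
level1 = group_importance([(i,) for i in range(8)])
level2 = group_importance([(i, i + 1) for i in range(0, 8, 2)])
print(sorted(level1, key=level1.get, reverse=True)[:3])
print(sorted(level2, key=level2.get, reverse=True))
```

The actual library builds this hierarchy adaptively, merging whichever groups score as most important at each level; see the repository's notebooks for its real API.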

model-interpretability neural-network-analysis machine-learning-debugging prediction-explanation AI-explainability
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 129
Forks: 22
Language: Jupyter Notebook
License: MIT
Last pushed: Aug 25, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/csinva/hierarchical-dnn-interpretations"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.