inouye-lab/ShapleyExplanationNetworks

Implementation of the paper "Shapley Explanation Networks"

Score: 42 / 100 (Emerging)

This project helps you understand how a machine learning model arrives at its predictions. It takes an existing PyTorch neural network and produces explanations for a given output, attributing the result to individual input features or hidden-layer representations. It is aimed at machine learning engineers and researchers who need to interpret a model's decisions.

No commits in the last 6 months.

Use this if you need to interpret the decisions of your PyTorch-based neural networks and understand the contribution of individual features or parts of the network to the output.

Not ideal if you are looking for a black-box explanation tool for models not built with PyTorch, or if you don't have a background in deep learning model development.
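To illustrate the kind of feature attribution involved (this is a generic sketch of classical Shapley-value attribution, not this repository's API — the paper's approach builds Shapley computation into the network itself), here is an exact brute-force computation over a toy model; the function `shapley_values`, the toy model `f`, and the baseline are all illustrative assumptions:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating every feature coalition.
    Cost is exponential in the number of features, so this is only
    practical for small toy inputs."""
    n = len(x)
    phi = [0.0] * n

    def value(S):
        # Features in coalition S keep their actual value;
        # the rest are replaced by the baseline.
        z = [x[j] if j in S else baseline[j] for j in range(n)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy scalar "model" with an interaction term, standing in for a network.
f = lambda z: 2 * z[0] + 3 * z[1] + z[0] * z[1]
phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
```

By the efficiency property, the attributions sum to `f(x) - f(baseline)`; here the interaction term is split equally between the two features, giving `phi = [2.5, 3.5]`.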

model-interpretability explainable-ai deep-learning-analysis neural-network-debugging feature-attribution
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 17 / 25


Stars: 88
Forks: 15
Language: Python
License: MIT
Last pushed: Jan 16, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/inouye-lab/ShapleyExplanationNetworks"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.