inouye-lab/ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
This project implements Shapley Explanation Networks for interpreting how machine learning models make their predictions. Given an existing PyTorch neural network, it produces explanations for why the model arrived at a particular output, attributing the decision to individual input features or hidden layers. It is aimed at machine learning engineers and researchers who need to interpret the contributions of different parts of the input or network to a model's final prediction.
No commits in the last 6 months.
Use this if you need to interpret the decisions of your PyTorch-based neural networks and understand the contribution of individual features or parts of the network to the output.
Not ideal if you are looking for a black-box explanation tool for models not built with PyTorch, or if you don't have a background in deep learning model development.
Stars: 88
Forks: 15
Language: Python
License: MIT
Last pushed: Jan 16, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/inouye-lab/ShapleyExplanationNetworks"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
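The same endpoint can also be called from Python rather than curl. A minimal sketch using only the standard library; the endpoint path is taken from the curl command above, but the shape of the JSON response is an assumption (it is not documented here):

```python
# Sketch: querying the quality-data API for one repository.
# Endpoint path comes from the listing; the response schema is assumed JSON.
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality record for one repository (assumed JSON)."""
    req = Request(
        quality_url(category, owner, repo),
        headers={"Accept": "application/json"},
    )
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the URL used in the curl example above.
    print(quality_url("ml-frameworks", "inouye-lab", "ShapleyExplanationNetworks"))
```

With a free API key, a header such as an authorization token would presumably be added to the `Request`; the exact header name is not specified in the listing.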
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...