suinleelab/path_explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
This tool helps machine learning engineers and researchers understand how their deep neural networks make decisions. Given a trained model and a dataset, it reports which input features (such as words in a sentence or columns in a table) matter most for the model's predictions, and how those features interact with each other. It is aimed at anyone who needs to debug, validate, or build trust in deep learning models, particularly those working with tabular or text data.
193 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to explain the reasoning behind predictions from your TensorFlow 2.x deep neural networks, especially for tabular data or transformer-based NLP models.
Not ideal if your model is not a deep neural network, you're not using TensorFlow 2.x, or you need explanations for image-based models without custom adaptations.
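The attributions this library computes belong to the path-attribution family (integrated/expected gradients): average the model's gradient along a path from a baseline input to the actual input, then scale by the input-baseline difference. A minimal numpy sketch of that idea, using a toy function with an analytic gradient rather than the library's actual API (which this example does not assume):

```python
import numpy as np

def f(x):
    # Toy "model": a multiplicative interaction between x[0] and x[1], plus x[2].
    return x[0] * x[1] + x[2]

def grad_f(x):
    # Analytic gradient of f at x (a real model would use autodiff).
    return np.array([x[1], x[0], 1.0])

def integrated_gradients(x, baseline, steps=200):
    # Average gradients along the straight line from baseline to x
    # (midpoint Riemann sum), then scale by the input-baseline difference.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([2.0, 3.0, 1.0])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
assert np.isclose(attr.sum(), f(x) - f(baseline))
print(attr)  # → [3. 3. 1.]
```

The completeness check at the end is what makes path attributions attractive for debugging: every unit of the prediction's change from the baseline is accounted for by some feature.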
Stars: 193
Forks: 29
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 16, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/suinleelab/path_explain"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...