suinleelab/path_explain

A repository for explaining feature attributions and feature interactions in deep neural networks.

Quality score: 53 / 100 (Established)

This tool helps machine learning engineers and researchers understand how their deep neural networks make decisions. You feed in your trained deep learning model and a dataset, and it outputs explanations of which input features (like words in a sentence or columns in a table) are most important for the model's predictions, and how these features interact with each other. This is for anyone who needs to debug, validate, or build trust in their deep learning models, particularly those working with tabular or text data.
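The attributions path_explain computes come from path methods such as Integrated and Expected Gradients: each feature's score is its distance from a baseline times the average gradient along a path between the two. A library-free toy sketch of that idea, using a made-up two-feature function `f` as a stand-in for a model and central differences in place of automatic differentiation:

```python
# Toy illustration of the path-attribution idea behind path_explain
# (Integrated Gradients): attribution_i is (x_i - baseline_i) times the
# average gradient of f along the straight path from baseline to x.

def f(x1, x2):
    # Made-up stand-in for a model, with a feature-interaction term.
    return x1 * x2 + x1

def integrated_gradients(f, x, baseline, steps=200, eps=1e-5):
    attrs = []
    for i in range(len(x)):
        grad_sum = 0.0
        for k in range(steps):
            alpha = (k + 0.5) / steps  # midpoint rule along the path
            point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
            plus, minus = list(point), list(point)
            plus[i] += eps
            minus[i] -= eps
            grad_sum += (f(*plus) - f(*minus)) / (2 * eps)  # central difference
        attrs.append((x[i] - baseline[i]) * grad_sum / steps)
    return attrs

x, baseline = [2.0, 3.0], [0.0, 0.0]
attrs = integrated_gradients(f, x, baseline)
# Completeness property: the attributions sum to f(x) - f(baseline).
print(attrs, sum(attrs), f(*x) - f(*baseline))
```

The completeness check at the end is what makes these scores trustworthy as explanations: every unit of the model's output change is accounted for by some feature.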

193 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to explain the reasoning behind predictions from your TensorFlow 2.x deep neural networks, especially for tabular data or transformer-based NLP models.

Not ideal if your model is not a deep neural network, you're not using TensorFlow 2.x, or you need explanations for image-based models without custom adaptations.
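A minimal usage sketch, assuming the `PathExplainerTF` interface described in the project's documentation (an `attributions` method for per-feature scores and an `interactions` method for pairwise effects); the class, method, and parameter names here should be verified against the version you install:

```python
# Hedged sketch: PathExplainerTF, attributions(), interactions(), and their
# parameters are assumed from the project's docs and may differ in your
# installed version of path_explain.
def explain(model, inputs, baseline):
    from path_explain import PathExplainerTF  # pip install path-explain

    explainer = PathExplainerTF(model)
    # Per-feature attribution scores (sampling a distribution of baselines
    # when use_expectation=True -- assumed behavior).
    attributions = explainer.attributions(
        inputs=inputs, baseline=baseline,
        batch_size=32, num_samples=200, use_expectation=True)
    # Pairwise feature-interaction scores.
    interactions = explainer.interactions(
        inputs=inputs, baseline=baseline,
        batch_size=32, num_samples=200, use_expectation=True)
    return attributions, interactions

# The library targets TensorFlow 2.x models; check availability first.
try:
    import tensorflow  # noqa: F401
    print("TensorFlow found; pass a tf.keras model to explain().")
except ImportError:
    print("TensorFlow not installed; explain() requires TF 2.x.")
```

`inputs` and `baseline` would be NumPy arrays shaped like the model's input; for tabular data the baseline is often an all-zeros row or a sample of training rows.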

model-interpretability deep-learning-explanation ai-explainability nlp-model-analysis tabular-data-insights
Stale for 6 months · No dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 18 / 25


Stars: 193
Forks: 29
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 16, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/suinleelab/path_explain"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.