interpretml/interpret-community

Interpret Community extends the Interpret repository with additional interpretability techniques and utility functions for handling real-world datasets and workflows.

Score: 59 / 100 (Established)

This tool helps data scientists understand why their machine learning models make certain predictions, especially for models trained on tabular data. It takes your trained model and data, and outputs explanations about feature importance or model behavior, even for complex deep learning or ensemble models. Data scientists can use this to debug models, ensure fairness, or build trust with stakeholders by explaining model decisions.

442 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you need to explain the predictions of machine learning models built with Python, particularly models trained on tabular data.

Not ideal if you primarily need to explain models trained on image, audio, or natural-language data that isn't represented in tabular form.

machine-learning-explainability data-science-workflow model-debugging feature-importance responsible-AI
Stale: 6 months
Maintenance: 0 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 23 / 25


Stars: 442
Forks: 88
Language: Python
License: MIT
Last pushed: Feb 07, 2025
Commits (30d): 0
Dependencies: 9
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/interpretml/interpret-community"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.