interpretml/interpret

Fit interpretable models. Explain blackbox machine learning.

Score: 67 / 100 (Established)

This project helps data scientists, analysts, and domain experts understand why their machine learning models make certain predictions. You input your trained model and data, and it outputs clear explanations, showing how different factors influence predictions globally and for individual cases. This is useful for anyone who needs to trust, debug, or explain their models to stakeholders or for regulatory compliance.

6,813 stars. Actively maintained with 44 commits in the last 30 days.

Use this if you need to understand the underlying logic of your predictive models, whether for debugging, improving performance, ensuring fairness, or meeting compliance requirements.

Not ideal if your primary goal is simply to train a highly accurate model without needing to understand its decision-making process.

model-debugging regulatory-compliance fairness-auditing AI-explainability predictive-analytics
No package · No dependents
Maintenance 20 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 21 / 25


Stars: 6,813
Forks: 778
Language: C++
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 44

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/interpretml/interpret"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.