interpretml/interpret
Fit interpretable models. Explain blackbox machine learning.
This project helps data scientists, analysts, and domain experts understand why their machine learning models make certain predictions. Give it a trained model and data, and it produces clear explanations of how each feature influences predictions, both globally and for individual cases. This is useful for anyone who needs to trust, debug, or explain their models, whether to stakeholders or for regulatory compliance.
6,813 stars. Actively maintained with 44 commits in the last 30 days.
Use this if you need to understand the underlying logic of your predictive models, whether for debugging, improving performance, ensuring fairness, or meeting compliance requirements.
Not ideal if your primary goal is simply to train a highly accurate model without needing to understand its internal decision-making.
Stars: 6,813
Forks: 778
Language: C++
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 44
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/interpretml/interpret"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
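The same endpoint can be called from Python with only the standard library. This is a sketch: the host, path, and `ml-frameworks` collection segment are copied verbatim from the curl example above, and the response schema is not documented here, so the helper simply decodes whatever JSON comes back.

```python
# Sketch of calling the quality endpoint from the Python standard library.
# URL pattern taken from the curl example above; response fields are not
# documented here, so the JSON is returned as-is.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository."""
    return f"{BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode its JSON body (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)

# Example (performs a live request):
#   data = fetch_quality("ml-frameworks", "interpretml", "interpret")
```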
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...