csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
This package fits inherently interpretable models: given a dataset, it learns concise rule lists and decision trees rather than complex 'black box' models, so practitioners in fields such as healthcare and finance can see the factors driving each prediction.
1,574 stars; used by 1 other package. One commit in the last 30 days. Available on PyPI.
Use this if you need predictive models where transparency and explainability are as important as accuracy, for example in high-stakes settings such as medical diagnostics or loan approvals.
Not ideal if your sole priority is maximum predictive accuracy and you don't need human-understandable explanations for the model's decisions.
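A minimal sketch of the sklearn-style workflow described above. The class name RuleFitClassifier is an assumption drawn from the imodels documentation, not from this page; treat it as illustrative, not canonical.

# Illustrative sketch of the sklearn-compatible workflow described above.
# The class name RuleFitClassifier is an assumption based on the imodels
# docs; swap in whichever interpretable estimator fits your problem.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from imodels import RuleFitClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitClassifier()         # learns a weighted set of if-then rules
model.fit(X_train, y_train)         # standard sklearn fit/predict interface
print(model.score(X_test, y_test))  # accuracy, like any sklearn classifier
# The fitted object exposes the learned rules for inspection; see the
# imodels docs for the exact attributes and visualization helpers.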
Stars: 1,574
Forks: 136
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 24, 2026
Commits (30d): 1
Dependencies: 8
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/csinva/imodels"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
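A short Python equivalent of the curl call above. The endpoint URL comes from this page; the header name used to pass an API key is an assumption, so check the API docs before relying on it.

# Python equivalent of the curl example above (requires the requests package).
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/csinva/imodels"

# Keyless access: up to 100 requests/day per the note above.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())

# With a free key (1,000 requests/day). The "X-Api-Key" header name is an
# assumption; consult the API docs for the actual auth scheme.
# resp = requests.get(URL, headers={"X-Api-Key": "YOUR_KEY"}, timeout=10)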
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
ModelOriented/DALEX
moDel Agnostic Language for Exploration and eXplanation