interpretml/DiCE
Generate Diverse Counterfactual Explanations for any machine learning model.
When a machine learning model makes a decision, like approving a loan or flagging a transaction, it often isn't clear why. This tool helps you understand how small, practical changes to the input data could flip that decision. You feed in a data point that received a particular outcome and get back a set of 'what-if' scenarios showing how to achieve a different result. This is for anyone who needs to explain automated decisions, such as a loan officer explaining a rejection or a healthcare professional understanding a diagnosis.
1,499 stars. No commits in the last 6 months.
Use this if you need to provide clear, actionable reasons for a machine learning model's specific output to an end-user, helping them understand how to change their situation to get a different outcome.
Not ideal if you're looking for a general overview of which features are most important to your model, rather than specific 'what-if' scenarios for individual predictions.
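The 'what-if' idea can be sketched in a few lines of plain Python: a toy model rejects a loan, and a simple search finds the smallest change to one feature that flips the decision. The `loan_model` rule and all thresholds below are invented for illustration; DiCE itself works with real trained models and searches for multiple *diverse* counterfactuals across many features at once.

```python
def loan_model(income, debt):
    """Toy classifier: approve (1) when income minus debt clears a threshold."""
    return 1 if income - debt >= 50_000 else 0


def counterfactual_income(income, debt, step=1_000, max_steps=200):
    """Raise income in small steps until the decision flips to 'approved'.

    Returns the smallest income (on this step grid) that gets approval,
    or None if no flip is found within the search range.
    """
    for k in range(max_steps + 1):
        candidate = income + k * step
        if loan_model(candidate, debt) == 1:
            return candidate
    return None


# A rejected applicant: 40k income, 10k debt -> 40k - 10k < 50k.
flipped_income = counterfactual_income(40_000, 10_000)
print(flipped_income)  # the 'what-if': income needed for approval
```

This one-feature line search is only the core intuition; DiCE's optimizers additionally trade off proximity (small changes) against diversity (several distinct scenarios), and let you freeze features the user cannot change.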
Stars: 1,499
Forks: 224
Language: Python
License: MIT
Last pushed: Jul 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/interpretml/DiCE"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...