interpretml/DiCE

Generate Diverse Counterfactual Explanations for any machine learning model.

Score: 50 / 100 (Established)

When a machine learning model makes a decision, such as approving a loan or flagging a transaction, it often isn't clear why. This tool helps you understand how small, practical changes to the input data could flip that decision. You feed in the data point that received a particular outcome and get back a set of 'what-if' scenarios showing how to achieve a different result. This is for anyone who needs to explain automated decisions, such as a loan officer explaining a rejection or a healthcare professional understanding a diagnosis.
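The 'what-if' idea can be sketched in a few lines of plain Python. The toy loan model, feature names, and perturbation ranges below are invented for illustration and are not DiCE's API; DiCE wraps real trained models (scikit-learn, TensorFlow, PyTorch) and searches for diverse counterfactuals rather than brute-forcing a grid.

```python
# Minimal sketch of a counterfactual explanation: given a model's decision
# for one input, search for small feature changes that flip the outcome.
# The decision rule and features here are hypothetical stand-ins.

def loan_model(applicant):
    """Toy decision rule standing in for a trained classifier."""
    score = 0.01 * applicant["income"] + 5 * applicant["years_employed"]
    return "approved" if score >= 60 else "rejected"

def counterfactuals(applicant, model, desired="approved"):
    """Brute-force search over small perturbations of each feature."""
    found = []
    for income_delta in range(0, 3001, 500):
        for years_delta in range(0, 6):
            candidate = dict(applicant)
            candidate["income"] += income_delta
            candidate["years_employed"] += years_delta
            if model(candidate) == desired:
                found.append(candidate)
    return found

applicant = {"income": 4000, "years_employed": 2}
print(loan_model(applicant))   # -> rejected (score is 50, below the 60 cutoff)
print(counterfactuals(applicant, loan_model)[0])
# -> {'income': 4000, 'years_employed': 4}: one nearby 'what-if' that flips the decision
```

Each returned candidate is a concrete, actionable scenario ("two more years of employment would flip this rejection"), which is the kind of output DiCE produces for arbitrary models.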

1,499 stars. No commits in the last 6 months.

Use this if you need to provide clear, actionable reasons for a machine learning model's specific output to an end-user, helping them understand how to change their situation to get a different outcome.

Not ideal if you're looking for a general overview of which features are most important to your model, rather than specific 'what-if' scenarios for individual predictions.

loan-applications credit-scoring healthcare-diagnostics student-admissions fraud-detection
Flags: stale (6 months), no package published, no known dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 22 / 25


Stars: 1,499
Forks: 224
Language: Python
License: MIT
Last pushed: Jul 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/interpretml/DiCE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.