Dependable-Intelligent-Systems-Lab/xwhy

Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations

Score: 54/100 (Established)

This tool helps machine learning engineers and data scientists understand why their complex AI models make specific predictions. It takes any trained machine learning model and a data point, then outputs an explanation highlighting which input features were most influential for that particular prediction. This is useful for anyone who needs to ensure trust and transparency in AI-driven decisions.

Available on PyPI.

Use this if you need to explain individual predictions from your machine learning models to stakeholders, regulators, or for debugging purposes, regardless of the model's complexity or type.

Not ideal if you are looking for global model explanations (how the model works overall) rather than explanations for specific predictions.

Tags: AI explainability, model interpretation, machine learning auditing, responsible AI, data science workflow
Maintenance: 10/25
Adoption: 5/25
Maturity: 25/25
Community: 14/25

Stars: 12
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 01, 2026
Commits (30d): 0
Dependencies: 8

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Dependable-Intelligent-Systems-Lab/xwhy"
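The same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above; the `quality_url` helper name and the shape of the returned JSON are illustrative, not part of any documented client:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (pattern taken from the curl example)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "Dependable-Intelligent-Systems-Lab", "xwhy")
print(url)

# The response schema is an assumption; inspect the actual JSON before relying on keys.
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```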

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.