Dependable-Intelligent-Systems-Lab/xwhy
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations
This tool helps machine learning engineers and data scientists understand why their complex AI models make specific predictions. It takes any trained machine learning model and a data point, then outputs an explanation highlighting which input features were most influential for that particular prediction. This is useful for anyone who needs to ensure trust and transparency in AI-driven decisions.
Available on PyPI.
Use this if you need to explain individual model predictions to stakeholders or regulators, or for debugging, regardless of the model's type or complexity.
Not ideal if you are looking for global model explanations (how the model works overall) rather than explanations for specific predictions.
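To illustrate what a local, model-agnostic explanation involves, here is a minimal sketch of the general perturbation-based approach behind LIME-style tools such as SMILE: sample points around the instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients approximate each feature's local influence. This is an illustrative toy, not xwhy's actual API; all names here are hypothetical.

```python
import numpy as np

def explain_locally(predict_fn, x, n_samples=500, scale=0.5, seed=0):
    """Fit a weighted linear surrogate around x; return per-feature weights.

    Hypothetical helper for illustration only (not part of xwhy).
    """
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise.
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = predict_fn(X)
    # Weight perturbed samples by proximity to x (RBF kernel).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale**2))
    # Weighted least squares on [1, X]: the surrogate's coefficients
    # approximate each feature's local influence on the prediction.
    Xc = np.column_stack([np.ones(n_samples), X])
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(Xc.T @ W @ Xc, Xc.T @ W @ y, rcond=None)
    return coef[1:]  # drop the intercept

# Toy "black box": the prediction depends mostly on feature 0.
black_box = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 1]
weights = explain_locally(black_box, np.array([1.0, 2.0]))
# weights[0] dominates, flagging feature 0 as most influential here.
```

SMILE builds on this idea but replaces the standard distance weighting with statistical distance measures; see the repository for the actual implementation.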
Stars: 12
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 01, 2026
Commits (30d): 0
Dependencies: 8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Dependable-Intelligent-Systems-Lab/xwhy"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...