cloudexplain/xaiflow
Create beautiful, interactive charts for explainable AI using MLflow
Building and deploying machine learning models can be complex. You need to understand why your models make certain predictions, communicate those insights to non-technical colleagues, and debug any unexpected behavior. This tool takes your model's SHAP explanation data and generates interactive, web-based reports that clearly visualize feature importance and individual predictions.
No commits in the last 6 months.
Use this if you need to create engaging, interactive reports to explain your machine learning models' decisions to stakeholders, validate model behavior, or debug issues, all integrated within your existing MLflow workflows.
Not ideal if you need to analyze extremely large datasets (thousands of samples or more) within the browser-based reports, as performance may degrade.
Stars: 7
Forks: —
Language: JavaScript
License: MIT
Last pushed: Jul 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cloudexplain/xaiflow"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...