PAIR-code/what-if-tool
Source code/webpage/demos for the What-If Tool
This tool helps non-technical users explore and understand how a machine learning model makes its predictions. You provide a trained classification or regression model and a dataset; the tool then visualizes the model's outputs and performance. It's designed for anyone who needs to evaluate model behavior, identify biases, or explain predictions without writing code.
Use this if you need to quickly visualize and interact with a trained ML model to understand its predictions and evaluate fairness or performance across different data subsets.
Not ideal if you are looking to train a new machine learning model from scratch or if you need to deploy models into production systems.
Stars: 992
Forks: 180
Language: HTML
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/PAIR-code/what-if-tool"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
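The same endpoint can be called from Python. Below is a minimal sketch using only the standard library; the endpoint URL comes from the curl example above, but the shape of the JSON response (field names like "stars" or "forks") is an assumption, not documented here:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def repo_quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_repo_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality record as a dict.

    NOTE: response field names (e.g. 'stars', 'forks') are assumptions
    about the JSON shape, not confirmed by the listing.
    """
    with urllib.request.urlopen(repo_quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_repo_quality("PAIR-code", "what-if-tool")
    print(data)
```

Unauthenticated calls are limited to 100 requests/day per the note above, so cache responses rather than polling.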
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...