BCG-X-Official/facet
Human-explainable AI.
This tool helps data scientists and machine learning engineers understand why their supervised machine learning models make certain predictions. It takes a trained model and its input data and reveals which features are most important and how they interact. This lets users move beyond knowing what a model predicts to understanding the factors that drive those predictions and exploring 'what-if' scenarios.
Use this if you need to explain the reasoning behind your supervised machine learning model's predictions, understand feature dependencies, or simulate how changes in input data might affect outcomes.
Not ideal if you are working with unsupervised learning models, deep learning models requiring specialized interpretability methods, or if your primary goal is model deployment rather than explanation.
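FACET builds on scikit-learn. As a rough illustration of the kind of question it answers — not FACET's own API, which centers on its LearnerInspector and simulation classes — here is a plain scikit-learn sketch that ranks feature importance and probes a simple 'what-if' change:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic regression data: 5 features, only 3 informative.
X, y = make_regression(n_samples=300, n_features=5, n_informative=3,
                       random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Which features matter most? (FACET goes further, decomposing SHAP values
# into synergy and redundancy between feature pairs.)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("features ranked by importance:", ranking)

# A minimal 'what-if': shift the top feature on one sample and observe
# how the prediction moves.
x = X[:1].copy()
baseline = model.predict(x)[0]
x[0, ranking[0]] += 1.0
print("prediction delta:", model.predict(x)[0] - baseline)
```

FACET wraps this style of analysis in a unified workflow and adds model simulation, so changes like the one above can be evaluated systematically across a feature's range.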
Stars: 531
Forks: 46
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Feb 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BCG-X-Official/facet"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
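For scripting, the endpoint shown in the curl example can be built programmatically. A minimal Python sketch, assuming other repositories in the 'ml-frameworks' category follow the same URL shape as the example above:

```python
def quality_api_url(owner: str, repo: str) -> str:
    """Build the quality-API endpoint URL.

    The base URL and 'ml-frameworks' path segment are taken verbatim
    from the curl example; applicability to other owner/repo pairs is
    an assumption based on that pattern.
    """
    return (
        "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
        f"{owner}/{repo}"
    )

print(quality_api_url("BCG-X-Official", "facet"))
```

Fetching the URL with any HTTP client (e.g. `urllib.request.urlopen`) then returns the stats listed above.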
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...