BCG-X-Official/facet

Human-explainable AI.

Quality score: 52/100 (Established)

This tool helps data scientists and machine learning engineers understand why their supervised machine learning models make the predictions they do. Given a trained model and its input data, it reveals which features are most important and how they interact. This lets users go beyond knowing what a model predicts to understanding the factors driving those predictions, and to explore 'what-if' scenarios.
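The two ideas in this description, feature importance and "what-if" simulation, can be illustrated with a toy example. FACET itself performs SHAP-based inspection of real scikit-learn models; everything below (the stand-in model, the synthetic data, the permutation-importance helper) is a hypothetical sketch using only the standard library, not FACET's API.

```python
# Toy sketch of feature importance and a "what-if" simulation.
# The "model" here is a hypothetical stand-in, not a trained estimator.
import random

random.seed(0)

def model(x1: float, x2: float) -> float:
    # Pretend trained model: x1 matters a lot, x2 barely at all.
    return 3.0 * x1 + 0.1 * x2

data = [(random.random(), random.random()) for _ in range(200)]

def permutation_importance(feature: int) -> float:
    """Mean absolute prediction change when one feature is shuffled."""
    shuffled = [row[feature] for row in data]
    random.shuffle(shuffled)
    deltas = []
    for row, new_val in zip(data, shuffled):
        perturbed = list(row)
        perturbed[feature] = new_val
        deltas.append(abs(model(*row) - model(*perturbed)))
    return sum(deltas) / len(deltas)

print("importance x1:", permutation_importance(0))  # large
print("importance x2:", permutation_importance(1))  # small

# "What-if": nudge x1 by +0.5 for one observation and compare predictions.
x1, x2 = data[0]
print("what-if delta:", model(x1 + 0.5, x2) - model(x1, x2))  # ≈ 1.5
```

Shuffling an influential feature degrades predictions far more than shuffling an irrelevant one, which is the intuition behind attribution methods; FACET's SHAP-based analysis additionally quantifies how features interact.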


Use this if you need to explain the reasoning behind your supervised machine learning model's predictions, understand feature dependencies, or simulate how changes in input data might affect outcomes.

Not ideal if you are working with unsupervised models or deep learning models that require specialized interpretability methods, or if your primary goal is model deployment rather than explanation.

Tags: machine-learning-interpretability, model-explanation, predictive-analytics, feature-analysis, data-science-workflow
No package, no dependents
Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 16/25


Stars: 531
Forks: 46
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Feb 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BCG-X-Official/facet"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
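The same report can be fetched from Python. The URL pattern below is taken directly from the curl example; the shape of the returned JSON is not documented here, so this sketch simply decodes the body without assuming any field names.

```python
# Sketch: fetching the quality report in Python instead of curl.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the report URL for a repo in a given category."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the report and decode it as JSON (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Usage (makes a network request, counts against the daily quota):
#   report = fetch_quality("ml-frameworks", "BCG-X-Official", "facet")
```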