shap and Shapley_regressions
SHAP is a widely adopted, production-ready library for model-agnostic feature attribution via Shapley values, while Shapley_regressions is a specialized academic tool for statistical inference on Shapley-based explanations. The two are complements that address different stages of the explainability workflow.
About shap
shap/shap
A game theoretic approach to explain the output of any machine learning model.
This tool helps data scientists and machine learning engineers understand why their models make specific predictions. Given a trained model and input data, it quantifies how much each individual feature contributes to the final output, clarifying complex model behavior. It is designed for anyone building or using ML models who needs to explain their results, such as a business analyst evaluating a credit risk model or a medical researcher interpreting a diagnostic tool.
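The game-theoretic principle behind shap can be sketched without the library itself: a feature's Shapley value is its average marginal contribution to the prediction over all coalitions of the other features. The sketch below computes exact Shapley values for a toy model, with "absent" features replaced by baseline values (one common approximation of feature absence); the function name and baseline convention are illustrative, not shap's actual API.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attribution for a single prediction.

    Features outside a coalition are set to their baseline values.
    Exponential in the number of features, so only viable for toy
    models; shap's optimized explainers avoid this enumeration.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (model(with_i) - model(without_i))
    return phi

# For a linear model, each Shapley value reduces to w_j * (x_j - baseline_j).
model = lambda v: 2 * v[0] + 3 * v[1] + 1 * v[2]
phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # → [2.0, 3.0, 1.0]
```

The attributions sum to the difference between the prediction for `x` and the prediction for the baseline, which is the additivity property that makes Shapley values attractive for explanation.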
About Shapley_regressions
bank-of-england/Shapley_regressions
Statistical inference on machine learning or general non-parametric models
This project helps economists, financial analysts, and other researchers understand the drivers of predictions from complex machine learning models by presenting model outputs in a familiar regression-table format. It takes macroeconomic time series data and machine learning model predictions and produces an interpretable regression table showing the statistical significance and impact of each input variable on the model's output. This lets you explain 'black-box' model decisions using well-understood statistical inference techniques.