shap and sage
SHAP is a mature, general-purpose library for computing Shapley-based explanations of individual predictions, with multiple explainer algorithms (TreeExplainer, KernelExplainer, DeepExplainer, etc.), while SAGE is a specialized research tool focused on global feature importance estimated with Shapley values, making them **complements** for practitioners who want both local, per-prediction explanations and global importance metrics.
About shap
shap/shap
A game theoretic approach to explain the output of any machine learning model.
This tool helps data scientists and machine learning engineers understand why their machine learning models make specific predictions. By taking a trained model and input data, it shows how much each individual feature contributes to the final output, clarifying complex model behavior. It's designed for anyone building or using ML models who needs to explain their results, like a business analyst evaluating a credit risk model or a medical researcher interpreting a diagnostic tool.
About sage
iancovert/sage
For calculating global feature importance using Shapley values.
This tool helps data scientists and machine learning engineers understand which features drive their "black-box" machine learning models overall. You provide a trained model and a representative dataset, and it outputs a breakdown of how much each input feature contributes to the model's predictive performance. This helps you explain complex model behavior to stakeholders or debug unexpected results.
Scores updated daily from GitHub, PyPI, and npm data.