shap and shapiq

SHAP is a mature, general-purpose library for computing Shapley values and SHAP interaction values across diverse model types, while shapiq is a specialized library focused on higher-order Shapley interactions (n-way feature interactions). This makes them **complements**: users might combine them when investigating both individual feature importance and complex multi-feature interaction effects.

| | shap | shapiq |
|---|---|---|
| Score | 82 (Verified) | 68 (Established) |
| Maintenance | 20/25 | 17/25 |
| Adoption | 15/25 | 11/25 |
| Maturity | 25/25 | 25/25 |
| Community | 22/25 | 15/25 |
| Stars | 25,115 | 695 |
| Forks | 3,481 | 48 |
| Downloads | | |
| Commits (30d) | 21 | 11 |
| Language | Jupyter Notebook | Python |
| License | MIT | MIT |
| Risk flags | None | None |

About shap

shap/shap

A game theoretic approach to explain the output of any machine learning model.

This tool helps data scientists and machine learning engineers understand why their machine learning models make specific predictions. By taking a trained model and input data, it shows how much each individual feature contributes to the final output, clarifying complex model behavior. It's designed for anyone building or using ML models who needs to explain their results, like a business analyst evaluating a credit risk model or a medical researcher interpreting a diagnostic tool.
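To make "how much each individual feature contributes" concrete, here is a minimal from-scratch sketch of the exact Shapley-value computation that shap approximates efficiently for real models. Everything here is illustrative, not shap's API: the `shapley_values` helper, the feature names, and the toy value function `v` are assumptions for the example.

```python
# From-scratch Shapley values for a toy game (NOT shap's API).
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's marginal contribution,
    averaged over all coalitions with the classic Shapley weights."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                S = set(S)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy "model": prediction is 10 if feature A is present, plus 5 if
# B and C are present together (a pure two-way interaction).
def v(coalition):
    out = 10.0 if "A" in coalition else 0.0
    if "B" in coalition and "C" in coalition:
        out += 5.0
    return out

phi = shapley_values(["A", "B", "C"], v)
# A gets its full solo effect; B and C split the interaction evenly,
# and the attributions sum to the full prediction v({A, B, C}).
```

The split illustrates a key property of Shapley values: contributions always sum to the model's output for the full feature set, so the attributions form an exact additive explanation.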

model-interpretability machine-learning-explanation AI-explainability predictive-modeling-auditing feature-importance

About shapiq

mmschlk/shapiq

Shapley Interactions and Shapley Values for Machine Learning

This tool helps data scientists and machine learning practitioners understand why their models make certain predictions. You provide a trained machine learning model and your dataset, and it shows you how individual features and combinations of features contribute to a specific prediction. This goes beyond just knowing which features are important, revealing how they interact with each other.
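The pairwise Shapley interaction index behind "how features interact" can be sketched in a few lines. This is a from-scratch illustration of the math, not shapiq's API; the `shapley_interaction` helper, the player names, and the toy game `v` are assumptions for the example.

```python
# From-scratch pairwise Shapley interaction index (NOT shapiq's API).
from itertools import combinations
from math import factorial

def shapley_interaction(players, value, i, j):
    """Shapley interaction index for the pair {i, j}: a weighted
    average, over coalitions S containing neither i nor j, of the
    discrete second derivative
    v(S+ij) - v(S+i) - v(S+j) + v(S)."""
    n = len(players)
    others = [p for p in players if p not in (i, j)]
    total = 0.0
    for k in range(len(others) + 1):
        weight = factorial(k) * factorial(n - k - 2) / factorial(n - 1)
        for S in combinations(others, k):
            S = set(S)
            total += weight * (
                value(S | {i, j}) - value(S | {i}) - value(S | {j}) + value(S)
            )
    return total

# Same kind of toy game: A acts alone (+10), B and C only pay off
# together (+5), so all interaction lives in the pair {B, C}.
def v(coalition):
    out = 10.0 if "A" in coalition else 0.0
    if "B" in coalition and "C" in coalition:
        out += 5.0
    return out

players = ["A", "B", "C"]
bc = shapley_interaction(players, v, "B", "C")  # recovers the joint effect
ab = shapley_interaction(players, v, "A", "B")  # no interaction here
```

This is exactly the extra signal the description mentions: individual Shapley values would silently split the B-C effect between the two features, while the interaction index attributes it to the pair explicitly.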

Machine Learning Explainability Model Interpretation Feature Interaction Analysis AI Trust and Transparency

Scores updated daily from GitHub, PyPI, and npm data.