iancovert/sage
For calculating global feature importance using Shapley values.
This tool helps data scientists and machine learning engineers understand which inputs their "black-box" machine learning models rely on. You provide a trained model and its training data, and it outputs a breakdown of how much each input feature contributes to the model's predictive power. This helps you explain complex model behavior to stakeholders or debug unexpected results.
Use this if you need to determine the global importance of different features in your machine learning model, especially when dealing with complex models where direct interpretation is difficult.
Not ideal if you're looking for explanations of individual predictions rather than overall feature importance, or if you only work with inherently interpretable models.
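To make the idea concrete, here is a minimal stdlib-only sketch of the permutation-sampling approach behind SAGE (this is not the library's actual API): features are revealed one at a time in a random order, hidden features are marginalized by filling in values from a background row, and each feature is credited with the average loss reduction it causes. The toy `model` and dataset are illustrative assumptions.

```python
import random

# Toy "model" for illustration: predicts y = 2*x0 + x1 (x2 is irrelevant).
def model(x):
    return 2 * x[0] + x[1]

def sage_values(model, X, Y, n_perms=2000, seed=0):
    """Estimate per-feature global importance by permutation sampling.

    Features outside the current coalition are marginalized by
    drawing their values from a random background row of X. Loss is
    squared error; a feature's value is the average loss reduction
    from revealing it after a random subset of the other features.
    """
    rng = random.Random(seed)
    d = len(X[0])
    totals = [0.0] * d
    for _ in range(n_perms):
        i = rng.randrange(len(X))          # sample a data point
        x, y = X[i], Y[i]
        perm = list(range(d))
        rng.shuffle(perm)                  # random feature ordering
        bg = X[rng.randrange(len(X))]      # background row for hidden features
        cur = list(bg)                     # start with every feature hidden
        prev_loss = (model(cur) - y) ** 2
        for j in perm:                     # reveal features one at a time
            cur[j] = x[j]
            loss = (model(cur) - y) ** 2
            totals[j] += prev_loss - loss  # loss reduction credited to j
            prev_loss = loss
    return [t / n_perms for t in totals]

# Tiny synthetic dataset matching the toy model above.
random.seed(1)
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
Y = [2 * x[0] + x[1] for x in X]
vals = sage_values(model, X, Y)
# x0 should score highest, x1 lower, and the irrelevant x2 near zero.
```

The real library wraps this loop with convergence detection, batching, and pluggable imputers and loss functions, but the credit-assignment scheme is the same.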
Stars: 285
Forks: 34
Language: Python
License: MIT
Category: ML frameworks
Last pushed: Mar 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iancovert/sage"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Related frameworks
shap/shap
A game theoretic approach to explain the output of any machine learning model.
mmschlk/shapiq
Shapley Interactions and Shapley Values for Machine Learning
predict-idlab/powershap
A power-full Shapley feature selection method.
aerdem4/lofo-importance
Leave One Feature Out Importance
ReX-XAI/ReX
Causal Responsibility EXplanations for Image Classifiers and Tabular Data