iancovert/sage

For calculating global feature importance using Shapley values.

Score: 56 / 100 (Established)

This tool helps data scientists and machine learning engineers understand what drives their "black-box" machine learning models' predictions. You provide a trained model and its training data, and it outputs a breakdown of how much each input feature contributes to the model's predictive power. This helps you explain overall model behavior to stakeholders or debug unexpected results.
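The idea behind this kind of global attribution can be sketched in plain NumPy: repeatedly reveal features to the model in a random order, and credit each feature with the loss reduction it causes when revealed. The sketch below uses a toy linear model and simple mean imputation for hidden features (SAGE itself offers more sophisticated imputers and estimators; this is an illustration of the principle, not the library's implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

# Stand-in for a trained model (here, the true function).
def model(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

mean = X.mean(axis=0)

def loss_with(features):
    """MSE when only `features` are known; the rest are mean-imputed."""
    X_imp = np.tile(mean, (len(X), 1))
    cols = list(features)
    X_imp[:, cols] = X[:, cols]
    return np.mean((model(X_imp) - y) ** 2)

d = X.shape[1]
values = np.zeros(d)
n_perms = 200
for _ in range(n_perms):
    perm = rng.permutation(d)
    known = set()
    prev = loss_with(known)
    for j in perm:
        known.add(j)
        cur = loss_with(known)
        values[j] += prev - cur  # loss reduction credited to feature j
        prev = cur
values /= n_perms

print(values.round(2))  # feature 0 largest, feature 2 near zero
```

Averaging marginal contributions over random orderings is what makes these Shapley values: each feature's credit accounts for every coalition of other features it might join.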


Use this if you need to determine the global importance of different features in your machine learning model, especially when dealing with complex models where direct interpretation is difficult.

Not ideal if you're looking for explanations of individual predictions rather than overall feature importance, or if you only work with inherently interpretable models.

Tags: machine-learning-explainability, model-auditing, feature-importance, data-science, model-debugging
No package published. No dependents.
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 285
Forks: 34
Language: Python
License: MIT
Last pushed: Mar 16, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iancovert/sage"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
