ModelOriented/kernelshap

Different SHAP algorithms

38 / 100 (Emerging)

When working with complex models in R, understanding why a model makes a specific prediction can be challenging. This package helps practitioners interpret model predictions by calculating SHAP values, which quantify how much each input feature contributes to the final output. It takes a trained model and a dataset as input and returns numerical SHAP values that can then be visualized to explain individual predictions or overall model behavior. This is useful for data scientists, statisticians, and analysts who build and deploy predictive models and need to explain the models' reasoning to stakeholders.
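The typical workflow follows this pattern. A minimal sketch using the package's kernelshap() function; the lm model, iris data, and background sample are illustrative choices, not taken from this page:

library(kernelshap)

# Fit any model that works with predict(); lm is used here only for illustration.
fit <- lm(Sepal.Length ~ ., data = iris)

# Rows to explain (features only) and a background sample used to
# approximate the expected prediction.
X  <- iris[1:100, -1]
bg <- iris[101:150, -1]

# Compute per-feature SHAP values for each row of X; the matrix is in s$S.
s <- kernelshap(fit, X, bg_X = bg)
head(s$S)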

No commits in the last 6 months.

Use this if you need to explain individual feature contributions to predictions from your R-based machine learning models, especially tree-based, generalized additive, or neural network models.

Not ideal if you need model interpretability outside R, or if you want a simple, non-technical explanation of model logic without diving into feature attribution values.

model-interpretability feature-attribution predictive-analytics machine-learning-explanation R-programming
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 12 / 25


Stars: 60
Forks: 7
Language: R
License: GPL-2.0
Last pushed: Sep 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ModelOriented/kernelshap"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
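From R, the same endpoint can be read directly. A sketch assuming the endpoint returns JSON, as the curl example suggests; jsonlite is a common choice but is not mandated by the service:

library(jsonlite)

# Fetch the quality record for this repository; the response shape is not
# documented here, so inspect it with str().
info <- fromJSON("https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ModelOriented/kernelshap")
str(info)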