ModelOriented/kernelshap
Different SHAP algorithms
When working with complex models in R, understanding why a model makes a specific prediction can be challenging. This package helps practitioners interpret predictions by computing SHAP values, which quantify how much each input feature contributes to the final output. It takes a trained model and a dataset as input and returns numerical SHAP values that can then be visualized to explain individual predictions or overall model behavior. This is useful for data scientists, statisticians, and analysts who build and deploy predictive models and need to explain model reasoning to stakeholders.
No commits in the last 6 months.
Use this if you need to explain the individual feature contributions to predictions from your R-based machine learning models, especially for tree-based, generalized additive, or neural network models.
Not ideal if your primary goal is model interpretability for non-R environments or if you are looking for a simple, non-technical explanation of model logic without diving into feature attribution values.
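The basic workflow is to fit any R model, then pass it to the package together with the rows to explain and a background dataset. A minimal sketch, assuming the kernelshap package is installed and using the built-in iris data as a stand-in for real training data:

```r
library(kernelshap)

# Fit any R model; here, a simple linear model on iris.
fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris)

# Rows to explain, and a background sample defining the "average" prediction.
X    <- iris[1:10, c("Sepal.Width", "Petal.Length", "Petal.Width")]
bg_X <- iris[, c("Sepal.Width", "Petal.Length", "Petal.Width")]

# One additive SHAP contribution per feature and row.
s <- kernelshap(fit, X = X, bg_X = bg_X)
s  # inspect the resulting SHAP values
```

The resulting object can be plotted with a companion visualization package such as shapviz to produce waterfall or importance plots.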
Stars
60
Forks
7
Language
R
License
GPL-2.0
Category
ml-frameworks
Last pushed
Sep 20, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ModelOriented/kernelshap"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Compare
Higher-rated alternatives
shap/shap
A game theoretic approach to explain the output of any machine learning model.
mmschlk/shapiq
Shapley Interactions and Shapley Values for Machine Learning
iancovert/sage
For calculating global feature importance using Shapley values.
aerdem4/lofo-importance
Leave One Feature Out Importance
predict-idlab/powershap
A power-full Shapley feature selection method.