wilsonjr/ClusterShapley
Explaining dimensionality reduction results using SHAP values
When analyzing complex datasets with many features, you often reduce their dimensions to visualize or cluster them. This tool helps you understand why specific groups (clusters) form in the reduced 2D view. It takes your original dataset and the clusters identified after dimensionality reduction, and then outputs explanations detailing which original features contribute most to the formation of each cluster. Data scientists, machine learning engineers, and researchers working with high-dimensional data will find this useful for interpreting model results.
Use this if you need to explain why certain clusters appeared after applying a non-linear dimensionality reduction technique like UMAP or t-SNE to your data.
Not ideal if you are looking to explain predictions from a traditional supervised machine learning model or if your data has not undergone dimensionality reduction and clustering.
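The workflow above (original features + cluster labels in, per-feature explanations out) can be sketched as follows. This is not ClusterShapley's actual API; it is a minimal illustration of the underlying idea using only scikit-learn and NumPy: fit a surrogate classifier from the original features to the cluster labels, then estimate Shapley-style attributions for one cluster with a small Monte Carlo permutation estimator. PCA stands in for UMAP/t-SNE to keep the example dependency-free; all names here are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 1. High-dimensional data -> 2D embedding -> clusters.
X, _ = make_blobs(n_samples=300, n_features=8, centers=3, random_state=0)
emb = PCA(n_components=2, random_state=0).fit_transform(X)  # stand-in for UMAP/t-SNE
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)

# 2. Surrogate classifier: original features -> cluster labels.
clf = RandomForestClassifier(random_state=0).fit(X, labels)

# 3. Monte Carlo Shapley attributions for one cluster's mean membership probability.
def shapley_importance(model, X, cluster, n_perm=20):
    n_features = X.shape[1]
    baseline = X.mean(axis=0)  # "absent" features are replaced by their mean
    phi = np.zeros(n_features)

    def value(mask):
        # Keep masked-in columns, replace the rest with the baseline.
        Xm = np.where(mask, X, baseline)
        return model.predict_proba(Xm)[:, cluster].mean()

    for _ in range(n_perm):
        order = rng.permutation(n_features)
        mask = np.zeros(n_features, dtype=bool)
        prev = value(mask)
        for j in order:  # add features one at a time, credit each marginal gain
            mask[j] = True
            cur = value(mask)
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

phi = shapley_importance(clf, X, cluster=0)
for j in np.argsort(-np.abs(phi)):
    print(f"feature {j}: {phi[j]:+.3f}")
```

Features with large positive attributions are the ones whose values push points toward that cluster, which is the kind of per-cluster explanation the tool produces. The permutation estimator satisfies the Shapley efficiency property: the attributions sum to the difference between the full-model value and the all-baseline value.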
Stars
55
Forks
7
Language
C++
License
BSD-3-Clause
Category
ML Frameworks
Last pushed
Jan 05, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/wilsonjr/ClusterShapley"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
shap/shap
A game theoretic approach to explain the output of any machine learning model.
mmschlk/shapiq
Shapley Interactions and Shapley Values for Machine Learning
iancovert/sage
For calculating global feature importance using Shapley values.
predict-idlab/powershap
A power-full Shapley feature selection method.
aerdem4/lofo-importance
Leave One Feature Out Importance