wilsonjr/ClusterShapley

Explaining dimensionality reduction results using SHAP values

Quality score: 42 / 100 (Emerging)

When analyzing complex datasets with many features, you often reduce their dimensions to visualize or cluster them. This tool helps you understand why specific groups (clusters) form in the reduced 2D view. It takes your original dataset and the clusters identified after dimensionality reduction, and then outputs explanations detailing which original features contribute most to the formation of each cluster. Data scientists, machine learning engineers, and researchers working with high-dimensional data will find this useful for interpreting model results.

Use this if you need to explain why certain clusters appeared after applying a non-linear dimensionality reduction technique like UMAP or t-SNE to your data.

Not ideal if you are looking to explain predictions from a traditional supervised machine learning model or if your data has not undergone dimensionality reduction and clustering.
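To make the idea concrete, here is a minimal, self-contained sketch of the general technique the tool is built on: score how well a subset of the original features separates the clusters, then distribute credit across features with exact Shapley values. Everything below (the toy data and the nearest-centroid "value function") is illustrative only and is not the ClusterShapley API.

```python
# Illustrative sketch: exact Shapley values over features, where the
# "payoff" of a feature subset is how well it separates two clusters.
# Toy data and value function are assumptions, not the library's API.
from itertools import combinations
from math import factorial, dist

# Toy data: feature 0 separates the clusters; features 1 and 2 carry
# the same values in both clusters and should get no credit.
points = [
    (0.00, 5.00, 1.00), (0.25, 4.75, 1.25), (0.50, 5.25, 0.75),  # cluster 0
    (3.00, 5.00, 1.00), (3.25, 5.25, 0.75), (2.75, 4.75, 1.25),  # cluster 1
]
labels = [0, 0, 0, 1, 1, 1]
n_features = 3

def value(subset):
    """Separation score of a feature subset: nearest-centroid accuracy."""
    if not subset:
        return 0.5  # chance level when no features are available
    project = lambda p: [p[i] for i in subset]
    centroids = {}
    for c in sorted(set(labels)):
        members = [project(p) for p, l in zip(points, labels) if l == c]
        centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    correct = 0
    for p, l in zip(points, labels):
        pred = min(centroids, key=lambda c: dist(project(p), centroids[c]))
        correct += pred == l
    return correct / len(points)

def shapley(i):
    """Exact Shapley value of feature i under the value function above."""
    others = [j for j in range(n_features) if j != i]
    phi = 0.0
    for size in range(len(others) + 1):
        w = factorial(size) * factorial(n_features - size - 1) / factorial(n_features)
        for s in combinations(others, size):
            phi += w * (value(s + (i,)) - value(s))
    return phi

# Feature 0 gets essentially all of the credit for the cluster split.
phi = [shapley(i) for i in range(n_features)]
```

Enumerating all subsets is only feasible for a handful of features; SHAP-based tools exist precisely to approximate these values efficiently on real, high-dimensional data.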

data-analysis machine-learning-interpretation feature-importance clustering-explanation dimensionality-reduction
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 55
Forks: 7
Language: C++
License: BSD-3-Clause
Last pushed: Jan 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/wilsonjr/ClusterShapley"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
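For scripted access, the same endpoint can be called from Python with the standard library alone. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented on this page):

```python
# Minimal sketch of calling the quality endpoint from Python (stdlib only).
# The URL is taken from the curl example above; the response format is an
# assumption -- inspect a real response before relying on specific fields.
import json
import urllib.request

QUALITY_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/wilsonjr/ClusterShapley"
)

def fetch_quality(url: str = QUALITY_URL) -> dict:
    """Fetch one repository's quality record and decode it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Within the free tier this needs no key; at 100 requests/day, cache responses locally if you are checking many repositories.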