AmirhosseinHonardoust/Shap-Mini
A minimal, reproducible explainable-AI demo using SHAP values on tabular data. Trains RandomForest or LogisticRegression models, computes global and local feature importances, and visualizes results through summary and dependence plots, all in under 100 lines of Python.
If you have a machine learning model trained on tabular data and need to understand why it makes certain predictions, this project helps. Given your data and a trained model, it generates visual explanations that answer questions like "which features matter most overall?" and "how does a specific feature affect predictions?" It is aimed at data analysts, business intelligence professionals, and anyone who uses simple classification models and needs to explain their decisions clearly to stakeholders.
Use this if you need to quickly generate clear, visual explanations for a tabular machine learning model's predictions, especially when using common models like Random Forest or Logistic Regression.
Not ideal if your model is a complex deep learning network, works on image or text data, or if you need highly customized, interactive explainability dashboards.
Stars
20
Forks
1
Language
Python
License
MIT
Category
Last pushed
Nov 09, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AmirhosseinHonardoust/Shap-Mini"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
shap/shap
A game theoretic approach to explain the output of any machine learning model.
mmschlk/shapiq
Shapley Interactions and Shapley Values for Machine Learning
iancovert/sage
For calculating global feature importance using Shapley values.
aerdem4/lofo-importance
Leave One Feature Out Importance
predict-idlab/powershap
A power-full Shapley feature selection method.