FernandoLpz/SHAP-Classification-example

This repository contains an example of how to use the SHAP library to interpret a machine learning model.

Score: 27 / 100 (Experimental)

This project helps data scientists and machine learning engineers understand why a classification model makes certain predictions. You provide a preprocessed dataset and a trained classification model, and it produces explanations of the model's behavior, showing how individual features influence its decisions.

No commits in the last 6 months.

Use this if you need to explain the predictions of your machine learning classification models, especially in critical applications like healthcare or finance where transparency is key.

Not ideal if you are looking for a tool to build or train a machine learning model from scratch, as this project focuses solely on model interpretation.

machine-learning-interpretation model-explainability classification-analysis data-science-workflow
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 14 / 25

How are scores calculated?

Stars

10

Forks

3

Language

Jupyter Notebook

License

Last pushed

Jul 13, 2021

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/FernandoLpz/SHAP-Classification-example"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.