MauroLuzzatto/explainy
explainy is a Python library for generating machine learning model explanations for humans
This library helps data scientists and machine learning engineers understand why their models make certain predictions. Given a trained model and the data it was trained on, it generates explanations as plain-text descriptions and visual plots, making it easier to explain complex 'black-box' models to stakeholders or to debug model behavior.
No commits in the last 6 months. Available on PyPI.
Use this if you need to generate human-understandable explanations for your machine learning model's predictions, either globally for the whole model or locally for individual data points.
Not ideal if you are looking for a library to build or train machine learning models, as this focuses solely on explaining existing ones.
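Global explanations of the kind explainy produces are typically built on techniques such as permutation feature importance. The sketch below illustrates that underlying idea using scikit-learn directly; it is not explainy's own API, and the dataset and model choices are illustrative assumptions.

```python
# Sketch of a "global" explanation via permutation importance,
# the kind of technique libraries like explainy build on.
# Uses scikit-learn directly, not explainy's API.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt the test score?
result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:4]:
    print(f"{name}: {score:.3f}")
```

A local explanation, by contrast, would attribute a single prediction to individual feature values (as SHAP-style methods do) rather than ranking features over the whole test set.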
Stars: 18
Forks: 1
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Apr 27, 2025
Commits (30d): 0
Dependencies: 11
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MauroLuzzatto/explainy"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
- shap/shap: A game theoretic approach to explain the output of any machine learning model.
- mmschlk/shapiq: Shapley Interactions and Shapley Values for Machine Learning
- iancovert/sage: For calculating global feature importance using Shapley values.
- predict-idlab/powershap: A power-full Shapley feature selection method.
- aerdem4/lofo-importance: Leave One Feature Out Importance