jphall663/interpretable_machine_learning_with_python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
This project helps data scientists and analysts create, explain, and test machine learning models that are easy to understand and justify. It provides methods for making complex models transparent, so you can explain how a model arrives at its predictions. You provide your data and a trained machine learning model; you get back visualizations, explanations, and analyses of the model's behavior, fairness, and potential vulnerabilities.
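The model-agnostic explanation techniques the project covers can be illustrated with a minimal permutation-importance sketch: shuffle one feature at a time and measure how much the model's accuracy drops. This is a pure-Python toy, assuming nothing about the repository's actual notebooks; the dataset, `model` callable, and function names here are hypothetical stand-ins.

```python
import random

# Toy dataset: 3 features; the label depends only on feature 0.
random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

# Stand-in "black-box" model: any callable mapping a row to a prediction.
def model(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == t for r, t in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, n_features):
    """Accuracy drop when each feature column is shuffled independently."""
    base = accuracy(rows, labels)
    drops = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        random.shuffle(col)
        perturbed = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, col)]
        drops.append(base - accuracy(perturbed, labels))
    return drops

drops = permutation_importance(X, y, 3)
print([round(d, 2) for d in drops])  # feature 0 should dominate
```

Shuffling feature 0 destroys the only signal the toy model uses, so its accuracy drop is large, while the unused features drop to zero; that asymmetry is the whole idea behind this family of explanations.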
680 stars. No commits in the last 6 months.
Use this if you need to build machine learning models that are not only accurate but also transparent, fair, and explainable to non-technical stakeholders or for regulatory compliance.
Not ideal if your only goal is maximum prediction accuracy, with no need to understand or explain the model's internal workings.
Stars: 680
Forks: 209
Language: Jupyter Notebook
License: —
Last pushed: Jun 17, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jphall663/interpretable_machine_learning_with_python"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
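From Python, the same endpoint can be built and fetched with the standard library. Note the path pattern (`owner/repo` appended to the quality endpoint) is inferred from the single curl example above and may not generalize to every repository; the `quality_url` helper is a hypothetical convenience, not part of any published client.

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL. The path pattern is inferred from the one
    documented curl example and may not hold for every repository."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("jphall663", "interpretable_machine_learning_with_python")
print(url)

# Actual fetch (requires network access; response schema is undocumented here):
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```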
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...