jphall663/interpretable_machine_learning_with_python

Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.

Score: 43 / 100 (Emerging)

This project helps data scientists and analysts create, explain, and test machine learning models that are easy to understand and justify. It provides methods for making complex models transparent, so you can explain how a model arrives at its predictions. You provide a dataset and a trained model, and get back visualizations, explanations, and analyses of the model's behavior, fairness, and potential vulnerabilities.
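To make that input→output idea concrete, here is a minimal, self-contained sketch of one common explanation technique the project covers, permutation importance. This is a generic illustration, not this repository's API: the toy `model`, the dataset, and the function names are all invented for the example.

```python
import random

def model(row):
    # Toy "trained model": its prediction depends only on the first feature.
    return 2.0 * row[0]

def mse(X, y):
    # Mean squared error of the model on dataset (X, y).
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, n_features, seed=0):
    # For each feature, shuffle its column and measure how much the
    # error grows; a larger increase means the model relies on it more.
    rng = random.Random(seed)
    base = mse(X, y)
    scores = []
    for j in range(n_features):
        col = [r[j] for r in X]
        rng.shuffle(col)
        Xp = [list(r) for r in X]
        for i, v in enumerate(col):
            Xp[i][j] = v
        scores.append(mse(Xp, y) - base)  # error increase for feature j
    return scores

X = [[float(i), float(i % 3)] for i in range(20)]
y = [2.0 * r[0] for r in X]  # targets match the toy model exactly
imp = permutation_importance(X, y, n_features=2)
# Shuffling feature 0 hurts; shuffling the unused feature 1 does nothing.
```

The same idea, with proper statistics and plots, is what tools like this repository provide for real models.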

680 stars. No commits in the last 6 months.

Use this if you need to build machine learning models that are not only accurate but also transparent, fair, and explainable to non-technical stakeholders or for regulatory compliance.

Not ideal if your only goal is the highest possible prediction accuracy, with no need to understand or explain the model's internal workings.

Topics: Machine Learning Explainability · Model Debugging · AI Ethics · Regulatory Compliance · Data Science
Flags: No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 25 / 25


Stars: 680
Forks: 209
Language: Jupyter Notebook
License: none
Last pushed: Jun 17, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jphall663/interpretable_machine_learning_with_python"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
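The same endpoint can be called from Python. This is a minimal sketch using only the standard library; the URL is taken from the curl command above, but the shape of the JSON response is not documented here, so treat the returned dict's keys as unknown until you inspect them.

```python
import json
import urllib.request

# Endpoint copied from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"
REPO = "ml-frameworks/jphall663/interpretable_machine_learning_with_python"
url = f"{BASE}/{REPO}"

def fetch_quality(url: str, timeout: float = 10.0) -> dict:
    """GET the quality report and decode the JSON body into a dict."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)
```

Usage: `report = fetch_quality(url)` (requires network access; no key needed up to 100 requests/day).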