solegalli/machine-learning-interpretability

Code repository for the online course Machine Learning Interpretability

Overall score: 42 / 100 (Emerging)

When you need to understand why your machine learning models make certain predictions, this resource provides the techniques to find out. It works with your trained models to uncover which features drive their decisions, and to what extent. It is aimed at data scientists, machine learning engineers, and analysts who need to explain complex model behavior to stakeholders or for regulatory compliance.

No commits in the last 6 months.

Use this if you need to explain how a machine learning model arrives at its predictions, both for individual cases and across its entire behavior.

Not ideal if you are looking for a pre-built, production-ready monitoring system for model drift or ethical AI, rather than foundational interpretability techniques.
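To give a sense of the kind of technique covered, below is a minimal sketch of global feature importance via permutation importance with scikit-learn. The dataset, model, and library choices are illustrative assumptions and are not taken from the course materials.

# Illustrative sketch (not from the course repo): permutation importance
# measures how much a trained model's score drops when each feature is shuffled.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on the held-out set and record the mean drop in score;
# the features with the largest drops drive the model's predictions the most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")

Per-instance explanations (why one particular prediction came out the way it did) typically rely on different tools, such as SHAP or LIME; whether the course uses those particular libraries is not stated here.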

Tags: model-explanation, machine-learning-auditing, AI-explainability, data-science-workflow, predictive-model-understanding
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 19 / 25
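
The four category scores sum to the overall score: 0 + 7 + 16 + 19 = 42 / 100.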

Stars: 30
Forks: 21
Language: Jupyter Notebook
License:
Last pushed: Oct 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/solegalli/machine-learning-interpretability"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
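
For programmatic use, here is a minimal sketch of the same request from Python with the requests library; the response fields are not documented here, so the sketch just prints the returned JSON.

import requests

# Same endpoint as the curl example above; anonymous access allows 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/solegalli/machine-learning-interpretability")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # field names are not documented here, so just dump the payload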