pbiecek/ema
Explanatory Model Analysis. Explore, Explain and Examine Predictive Models
When you need to understand why a predictive model made a particular decision, this project helps you explore its inner workings: it takes an existing predictive model and explains how it arrived at its predictions, making the model's output easier to trust and communicate. Data scientists, machine learning engineers, and researchers can use it to gain insight into complex algorithms.
198 stars. No commits in the last 6 months.
Use this if you need to explain the decisions of a machine learning model to stakeholders or debug why a model behaves a certain way.
Not ideal if you are looking for a tool to build or train predictive models from scratch.
Stars: 198
Forks: 39
Language: Jupyter Notebook
License: —
Last pushed: Apr 14, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pbiecek/ema"
Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...