TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
ELI5 helps you understand why your machine learning models make certain predictions. You provide a trained model and an example input, and it shows which parts of that input (specific words in a text, regions of an image) most influenced the prediction. It is aimed at data scientists, machine learning engineers, and anyone building or deploying predictive models who needs to explain a model's behavior.
2,773 stars. Actively maintained with 52 commits in the last 30 days.
Use this if you need to debug a machine learning model, explain its decisions to stakeholders, or gain insights into how different features impact predictions.
Not ideal if you are looking for a tool to build or train machine learning models from scratch, rather than explain existing ones.
Stars: 2,773
Forks: 327
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 10, 2026
Commits (30d): 52
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TeamHG-Memex/eli5"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...
ModelOriented/DALEX
moDel Agnostic Language for Exploration and eXplanation