SeldonIO/alibi
Algorithms for explaining machine learning models
Alibi helps data scientists and machine learning engineers understand why a trained model makes specific predictions. Given a model and data, it identifies which features most influenced a decision, whether the input is an image, text, or tabular data. These explanations help communicate model behavior to stakeholders and build trust in AI systems.
2,617 stars. Used by 1 other package. Available on PyPI.
Use this if you need to explain the reasoning behind a machine learning model's predictions to non-technical users or for debugging model behavior.
Not ideal if you need to detect outliers, concept drift, or adversarial attacks in your data; those tasks are covered by its sister project, Alibi-Detect.
Stars: 2,617
Forks: 266
Language: Python
License: —
Last pushed: Oct 17, 2025
Commits (30d): 0
Dependencies: 15
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SeldonIO/alibi"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
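The curl call above can also be made from Python. Below is a small stdlib-only sketch that builds the request; the `X-API-Key` header name for keyed requests is an assumption, so check the API's documentation for its actual auth scheme.

```python
# Programmatic equivalent of the curl call above (a sketch; the
# X-API-Key header name for the keyed 1,000/day tier is an
# assumption, not a documented part of the API).
from typing import Optional
from urllib.request import Request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_request(category: str, owner: str, repo: str,
                    api_key: Optional[str] = None) -> Request:
    """Build the GET request for one repository's quality data."""
    url = f"{BASE}/{category}/{owner}/{repo}"
    headers = {"Accept": "application/json"}
    if api_key:  # hypothetical header; verify against the API docs
        headers["X-API-Key"] = api_key
    return Request(url, headers=headers)

req = quality_request("ml-frameworks", "SeldonIO", "alibi")
print(req.full_url)
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SeldonIO/alibi
```

Pass the `Request` to `urllib.request.urlopen` (or swap in `requests`/`httpx`) to fetch and decode the JSON response.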
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...