SeldonIO/alibi

Algorithms for explaining machine learning models

Score: 62 / 100 (Established)

This tool helps data scientists and machine learning engineers understand why their machine learning models make specific predictions. Given a trained model and data, it reveals which features most influenced a prediction, whether the input is an image, text, or tabular data. This insight helps explain model behavior to stakeholders and build trust in AI systems.
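
As a rough illustration of that workflow, here is a minimal sketch using alibi's AnchorTabular explainer on a scikit-learn classifier. The dataset, model, and exact arguments are illustrative assumptions and may differ across alibi versions.

# Minimal sketch: explain a single tabular prediction with an anchor explanation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
X, y = data.data, data.target

model = RandomForestClassifier().fit(X, y)

# AnchorTabular takes a prediction function and the feature names.
explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(X)  # fit on training data so the explainer can perturb instances realistically

# Explain one prediction: which feature conditions "anchor" the model's decision?
explanation = explainer.explain(X[0])
print("Anchor:", explanation.anchor)        # human-readable feature conditions
print("Precision:", explanation.precision)  # how often the prediction holds under the anchor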

2,617 stars. Used by 1 other package. Available on PyPI.

Use this if you need to explain a machine learning model's predictions to non-technical users, or to debug model behavior.

Not ideal if you're looking to detect outliers, concept drift, or adversarial attacks in your data; those tasks are covered by its sister project, Alibi-Detect.

machine-learning-explainability model-auditing AI-transparency data-science predictive-modeling
Maintenance 6 / 25
Adoption 11 / 25
Maturity 25 / 25
Community 20 / 25
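The four component scores add up to the overall score: 6 + 11 + 25 + 20 = 62 / 100.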

How are scores calculated?

Stars: 2,617
Forks: 266
Language: Python
License: (not listed)
Last pushed: Oct 17, 2025
Commits (30d): 0
Dependencies: 15
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SeldonIO/alibi"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
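
The same endpoint can also be queried from Python with the requests library. This sketch assumes the endpoint returns JSON and simply prints the raw response; the response fields are not documented here.

# Minimal sketch: fetch the quality data for this repo from the API.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SeldonIO/alibi"
response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors or rate limiting
print(response.json())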