MarcelRobeer/explabox
Explore/examine/explain/expose your model with the explabox!
This tool helps data science teams and auditors understand why their text-based AI models make certain decisions. It takes your existing text data and AI model as input, then generates easy-to-understand reports and visualizations. The output clearly shows how the model performs, its sensitivities, and individual prediction explanations, helping data scientists, AI ethicists, and compliance officers ensure fairness and robustness.
No commits in the last 6 months.
Use this if you need to comprehensively audit, understand, and explain the behavior of your text-based machine learning models to various stakeholders, from technical teams to legal and ethical oversight.
Not ideal if your models do not process natural language text or if you primarily need to build new models rather than analyze existing ones.
Stars: 19
Forks: —
Language: Python
License: LGPL-3.0
Last pushed: Oct 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MarcelRobeer/explabox"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
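A minimal Python sketch of calling the endpoint shown above. The response schema is not documented here, so the example simply fetches the payload and pretty-prints whatever JSON comes back; the function name is illustrative, not part of the API:

```python
import json
import urllib.request

# Public endpoint from the listing above; 100 requests/day without a key.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MarcelRobeer/explabox"

def fetch_quality_data(url: str = URL) -> dict:
    """Fetch the repo's quality data and return the parsed JSON payload."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Pretty-print the raw payload; field names depend on the API's schema.
    print(json.dumps(fetch_quality_data(), indent=2))
```

With an API key, you would typically pass it as a header or query parameter; check the service's docs for the exact mechanism before relying on this sketch.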
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...