chus-chus/teex
A toolbox for the evaluation of machine learning explanations
This tool helps machine learning engineers and researchers assess the quality of their model's explanations. You provide the explanations your model generates (e.g., why an image was classified a certain way, or which features mattered most) along with true, human-validated explanations. It then outputs scores and metrics indicating how closely the model's explanations match the ground truth.
No commits in the last 6 months. Available on PyPI.
Use this if you need to objectively measure the quality of your black-box model's explanations against known ground truths for feature importance, saliency maps, decision rules, or word importance.
Not ideal if you need methods to generate explanations (this tool focuses solely on evaluating existing ones), or if you need actively maintained support.
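The kind of comparison described above can be illustrated with a minimal sketch. Note this is not teex's actual API: the function name and metric below are hypothetical stand-ins, and teex itself ships dedicated metrics per explanation type (feature importance, saliency maps, decision rules, word importance).

```python
import numpy as np

def cosine_score(pred, truth):
    """Hypothetical example metric: cosine similarity between a
    model's feature-importance vector and a ground-truth vector.
    Returns a value in [-1, 1]; 1 means the explanations align."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    denom = np.linalg.norm(pred) * np.linalg.norm(truth)
    return float(pred @ truth / denom) if denom else 0.0

# Model says feature 0 matters most; the ground truth agrees.
predicted = [0.9, 0.1, 0.0]
ground_truth = [1.0, 0.0, 0.0]
print(round(cosine_score(predicted, ground_truth), 3))  # → 0.994
```

A real evaluation would aggregate such per-instance scores over a dataset, which is the role a toolbox like this fills.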
Stars
16
Forks
—
Language
Python
License
MIT
Last pushed
Jan 07, 2024
Commits (30d)
0
Dependencies
6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/chus-chus/teex"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...