chus-chus/teex

A Toolbox for the Evaluation of machine learning Explanations

Overall score: 31 / 100 (Emerging)

This tool helps machine learning engineers and researchers assess the quality of their models' explanations. You provide the explanations your model generates (for example, why an image was classified a certain way, or which features matter most) alongside ground-truth, human-validated explanations, and it outputs scores and metrics that quantify how closely the two agree.
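Concretely, the comparison boils down to scoring the agreement between two importance vectors. The sketch below illustrates that idea with plain NumPy and scikit-learn rather than teex's own API; the vectors and the 0.5 threshold are made up for illustration.

```python
# Illustrative only: the kind of comparison teex automates, shown with
# plain NumPy/scikit-learn rather than teex's own API.
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical data: ground-truth vs. model-generated feature importances.
gt_importance   = np.array([0.9, 0.1, 0.0, 0.7])   # human-validated
pred_importance = np.array([0.8, 0.3, 0.1, 0.5])   # from your explainer

# Cosine similarity: do the two importance vectors point the same way?
cos_sim = np.dot(gt_importance, pred_importance) / (
    np.linalg.norm(gt_importance) * np.linalg.norm(pred_importance)
)

# F1 over binarized importances: does the explainer pick the right features?
threshold = 0.5
f1 = f1_score(gt_importance > threshold, pred_importance > threshold)

print(f"cosine similarity: {cos_sim:.3f}, F1: {f1:.3f}")
```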

No commits in the last 6 months. Available on PyPI.

Use this if you need to objectively measure the quality of your black-box model's explanations against known ground truths for feature importance, saliency maps, decision rules, or word importance.
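For feature importance specifically, a call might look roughly like the following. This is a hedged sketch, not verified against a specific teex release: the module path, the `feature_importance_scores` helper, and the metric keys are assumptions based on teex's documented layout (`teex.<explanationType>.eval`), so check them against the project's current docs.

```python
# Hedged sketch: module path, function name, and metric keys below are
# assumptions based on teex's documented layout; verify against the docs.
import numpy as np
from teex.featureImportance.eval import feature_importance_scores

gts   = np.array([[0.9, 0.1, 0.0, 0.7]])  # ground-truth explanations
preds = np.array([[0.8, 0.3, 0.1, 0.5]])  # your explainer's output

# 'cs' = cosine similarity, 'fscore' = F1 over thresholded importances.
print(feature_importance_scores(gts, preds, metrics=['cs', 'fscore']))
```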

Not ideal if you need methods to generate explanations, since this tool focuses solely on evaluating existing ones, or if you need an actively maintained project.

Tags: Machine Learning Evaluation, Explainable AI, Model Interpretability, Data Science Research, AI Trust and Safety
Status: Stale (6 months)
Maintenance 0 / 25
Adoption 6 / 25
Maturity 25 / 25
Community 0 / 25

Stars: 16
Forks:
Language: Python
License: MIT
Last pushed: Jan 07, 2024
Commits (30d): 0
Dependencies: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/chus-chus/teex"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
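For scripting, here is a minimal Python sketch of the same request, assuming the endpoint returns JSON (the response schema isn't documented here).

```python
# Fetch the quality-score payload for chus-chus/teex from the public API.
# Assumes a JSON response; inspect the output to learn the actual schema.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/chus-chus/teex"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))
```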