TrusteeML/trustee
This package implements the Trustee framework for extracting decision-tree explanations from black-box ML models.
This tool helps data scientists and machine learning engineers understand why their "black-box" AI models make certain predictions. You feed it a trained model and the data it was trained on, and it produces a simpler decision tree that explains the original model's logic. It is aimed at practitioners who need to justify or troubleshoot complex AI systems.
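The core idea described above, training a surrogate decision tree to mimic a black-box model's predictions, can be sketched with scikit-learn alone. This is an illustrative sketch of the general technique, not trustee's actual API (which adds iterative sampling and stability checks; see its documentation):

```python
# Sketch of surrogate-tree extraction: distill a black-box model into a
# shallow decision tree trained on the black box's own predictions.
# (Illustrative only -- trustee's real API differs from this.)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# The "black box" we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a shallow surrogate tree on the black box's predictions, not the
# ground-truth labels, so the tree approximates the model's decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
```

The surrogate's splits can then be inspected (e.g. with `sklearn.tree.export_text`) to read off the rules the black box appears to follow.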
No commits in the last 6 months. Available on PyPI.
Use this if you need to explain the reasoning behind predictions made by complex machine learning models in a clear, human-understandable way.
Not ideal if your primary goal is to improve model accuracy or performance, as this tool focuses on interpretability rather than direct model optimization.
Stars: 33
Forks: 6
Language: Python
License: GPL-3.0
Last pushed: May 20, 2024
Commits (30d): 0
Dependencies: 12
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TrusteeML/trustee"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
- obss/sahi: Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
- tensorflow/tcav: Code for the TCAV ML interpretability project
- MAIF/shapash: 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
- TeamHG-Memex/eli5: A library for debugging/inspecting machine learning classifiers and explaining their predictions
- csinva/imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...