TrusteeML/trustee

This package implements the Trustee framework, which extracts decision-tree explanations from black-box ML models.

Score: 47 / 100 (Emerging)

This tool helps data scientists and machine learning engineers understand why their "black-box" AI models make certain predictions. You feed it your trained machine learning model and the data it was trained on, and it outputs a simpler decision tree that explains the original model's logic. This is for professionals who need to justify or troubleshoot complex AI systems.

No commits in the last 6 months. Available on PyPI.

Use this if you need to explain the reasoning behind predictions made by complex machine learning models in a clear, human-understandable way.

Not ideal if your primary goal is to improve model accuracy or performance, as this tool focuses on interpretability rather than direct model optimization.
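The surrogate idea described above can be illustrated with a toy sketch in pure Python. This is not Trustee's actual API: the "black box", the sampling grid, and the one-split "stump" surrogate are all invented for illustration; Trustee itself fits full decision trees against real trained models.

```python
def black_box(x):
    """Stand-in for an opaque model with a hidden decision rule."""
    return 1 if 0.4 * x - 1.0 > 0 else 0  # secretly: x > 2.5

# Step 1: query the black box on a grid of inputs (its "training data").
samples = [i * 0.5 for i in range(-10, 11)]   # -5.0, -4.5, ..., 5.0
labels = [black_box(x) for x in samples]

# Step 2: fit the simplest possible surrogate -- a one-split decision
# stump -- by picking the threshold that best reproduces the black
# box's own predictions (fidelity), not any ground-truth labels.
def fit_stump(xs, ys):
    best_t, best_fidelity = None, -1.0
    for t in xs:
        fidelity = sum((x > t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if fidelity > best_fidelity:
            best_t, best_fidelity = t, fidelity
    return best_t, best_fidelity

threshold, fidelity = fit_stump(samples, labels)
# The surrogate rule "x > threshold" now explains the black box's logic.
```

The recovered threshold matches the hidden rule, and fidelity (agreement with the black box) is the same quality measure a real surrogate tree is judged by.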

machine-learning-interpretability model-auditing AI-explainability data-science network-security
Stale: 6 months
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 25 / 25
Community: 15 / 25


Stars: 33
Forks: 6
Language: Python
License: GPL-3.0
Last pushed: May 20, 2024
Commits (30d): 0
Dependencies: 12

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TrusteeML/trustee"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
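The same request can be made from Python with only the standard library. This is a hedged sketch based on the curl example above: the URL pattern is taken from that example, but the response being JSON is an assumption, and `quality_url`/`fetch_quality` are illustrative helper names, not part of any published client.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-repository quality endpoint URL (pattern
    taken from the curl example: /quality/<category>/<owner>/<repo>)."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, timeout=10):
    """Fetch and decode one quality record. Assumes the endpoint
    returns JSON; adjust parsing if it does not."""
    url = quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

# Example call (network access required):
#   record = fetch_quality("ml-frameworks", "TrusteeML", "trustee")
```

Without an API key this shares the 100-requests/day anonymous quota, so cache responses rather than re-fetching on every run.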