SquareResearchCenter-AI/BEExAI

Benchmark to Evaluate EXplainable AI

Score: 39/100 (Emerging)

When you're working with machine learning models and need to understand *why* they make certain predictions, BEExAI helps you systematically evaluate different explainable AI methods. It takes your existing tabular dataset and trained models, then quantifies how well various explanation techniques reveal each model's decision-making process. The tool is aimed at AI practitioners, researchers, and anyone building and deploying models who needs to ensure their explanations are reliable and trustworthy.

No commits in the last 6 months. Available on PyPI.

Use this if you need a standardized way to compare and benchmark explainable AI (XAI) methods across tabular datasets and machine learning models, so you can pick the best one for your use case.

Not ideal if you're looking for an explanation method to integrate directly into an application without needing to compare its performance against others.

Tags: explainable AI, machine learning, evaluation, model interpretability, AI ethics, data science, research
Maintenance: 0/25 (stale for 6 months)
Adoption: 6/25
Maturity: 25/25
Community: 8/25


Stars: 20
Forks: 2
Language: Python
License: BSD-3-Clause
Last pushed: Mar 14, 2025
Commits (30d): 0
Dependencies: 13

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SquareResearchCenter-AI/BEExAI"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
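The same endpoint can also be called programmatically. A minimal Python sketch using only the standard library; it assumes the endpoint returns JSON (the response fields are not documented here, so `fetch_quality` simply decodes whatever comes back):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(repo_slug: str, category: str = "ml-frameworks") -> str:
    """Build the quality-report URL for an 'owner/name' repo slug."""
    return f"{BASE}/{category}/{repo_slug}"

def fetch_quality(repo_slug: str) -> dict:
    """Fetch and decode the quality report (makes a network request)."""
    with urllib.request.urlopen(quality_url(repo_slug)) as resp:
        return json.load(resp)

# Builds the same address the curl example above hits:
url = quality_url("SquareResearchCenter-AI/BEExAI")
```

Swap in your own repo slug (and category, if the listing differs) to query other projects within the daily request limit.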