JonathanCrabbe/RobustXAI

This repository implements explanation invariance and equivariance metrics, a framework for evaluating the robustness of interpretability methods.

Overall score: 26 / 100 (Experimental)

When you use interpretability methods to understand an AI model's decisions, you need to be sure those explanations are reliable. This project provides metrics to evaluate how robust they are: given a model and an interpretability method, it outputs scores measuring how invariant and equivariant the explanations are under transformations of the input data. It is aimed at data scientists, machine learning engineers, and AI researchers who build or deploy interpretable AI models.
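
Concretely, explanation invariance asks whether an explanation stays the same when the input is transformed by a symmetry the model is supposed to respect, while explanation equivariance asks whether the explanation transforms along with the input. Below is a minimal sketch of both metrics in Python, assuming a user-supplied explain(model, x) function that returns a feature-attribution array and a list of symmetry transformations; this is an illustrative sketch, not the repository's actual API:

# Illustrative sketch only; not the repository's actual API.
# Assumes explain(model, x) returns a numpy array of attributions with the
# same shape as x, and transforms is a list of symmetry transformations
# (e.g. translations for a shift-invariant CNN).
import numpy as np

def _cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def explanation_invariance(model, explain, x, transforms):
    # Average similarity between the explanation of x and the explanation of g(x):
    # 1.0 means the explanation is unchanged by every transformation.
    base = explain(model, x)
    return float(np.mean([_cosine(base, explain(model, g(x))) for g in transforms]))

def explanation_equivariance(model, explain, x, transforms):
    # Average similarity between g(explanation of x) and the explanation of g(x):
    # 1.0 means the explanation transforms exactly like the input.
    base = explain(model, x)
    return float(np.mean([_cosine(g(base), explain(model, g(x))) for g in transforms]))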

No commits in the last 6 months.

Use this if you need to rigorously test the reliability and consistency of your AI model's explanations under different data transformations.

Not ideal if you are looking for new interpretability methods themselves, rather than a way to evaluate existing ones.

AI-explainability · model-robustness · interpretability-evaluation · machine-learning-auditing · trustworthy-AI
No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 13 / 25

Stars: 10
Forks: 2
Language: Jupyter Notebook
License: none
Last pushed: Nov 21, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JonathanCrabbe/RobustXAI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
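
The same data can be fetched programmatically. A minimal Python sketch equivalent to the curl call above, assuming the endpoint returns a JSON payload (the response schema is not documented here):

import requests

# Same endpoint as the curl example; no API key needed for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JonathanCrabbe/RobustXAI"

response = requests.get(URL, timeout=10)
response.raise_for_status()
print(response.json())  # assumed to be a JSON document with the scores shown above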