JonathanCrabbe/RobustXAI
This repository contains the implementation of explanation invariance and equivariance metrics, a framework for evaluating the robustness of interpretability methods.
When you use interpretability methods to understand an AI model's decisions, you need to be sure those explanations are reliable. This project provides metrics to evaluate how robust those explanations are: it takes a model and an interpretability method as input and outputs scores measuring the explanations' invariance and equivariance under transformations of the input data. It is aimed at data scientists, machine learning engineers, and AI researchers who build or deploy interpretable models.
No commits in the last 6 months.
Use this if you need to rigorously test the reliability and consistency of your AI model's explanations under different data transformations.
Not ideal if you are looking for new interpretability methods themselves, rather than a way to evaluate existing ones.
Stars: 10
Forks: 2
Language: Jupyter Notebook
License: —
Last pushed: Nov 21, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JonathanCrabbe/RobustXAI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
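The same endpoint can be called from Python using only the standard library. This is a minimal sketch: the response schema is not documented here, so it simply returns the parsed JSON, and the helper names (`quality_url`, `fetch_quality`) are illustrative, not part of the API.

```python
import json
import urllib.request

# Base URL taken from the curl example above
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the report URL for one repository (mirrors the curl example)."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality report as parsed JSON.

    No key is needed for up to 100 requests/day; consult the API docs
    for how to attach a key for the higher 1,000/day limit.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, `fetch_quality("JonathanCrabbe", "RobustXAI")` requests the same URL as the curl command shown above.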
Higher-rated alternatives
obss/sahi - Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots
tensorflow/tcav - Code for the TCAV ML interpretability project
MAIF/shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5 - A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...