Karim-53/Compare-xAI
A Unified Approach to Evaluate and Compare Explainable AI methods
This project helps data scientists and machine learning engineers evaluate and compare Explainable AI (xAI) methods. It runs a collection of xAI algorithms through a set of predefined tests and produces a benchmark of their performance across metrics such as comprehensibility and portability. The output lets practitioners identify which xAI methods are best suited to their specific models and data.
No commits in the last 6 months.
Use this if you are a data scientist or machine learning engineer who needs to systematically assess and choose the most effective Explainable AI technique for your models based on rigorous testing.
Not ideal if you are looking for a tool to build or train new machine learning models, or if you only need to explain a single model without comparing xAI methods against each other.
Stars: 14
Forks: 4
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Jan 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Karim-53/Compare-xAI"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
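If you prefer to consume the endpoint from Python rather than curl, here is a minimal sketch using only the standard library. It assumes the endpoint returns a JSON body; no specific response schema is assumed, so the script simply pretty-prints whatever comes back.

```python
# Minimal sketch: fetch the repo quality data and pretty-print the JSON.
# Assumes the endpoint returns JSON; no particular field names are assumed.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Karim-53/Compare-xAI"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))
```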
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...