dylan-slack/Fooling-LIME-SHAP
Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
This project helps machine learning developers and researchers evaluate the robustness of their model explanations. It takes an existing predictive model and an explanation method (such as LIME or SHAP) as input, then generates an 'attack' that manipulates which features appear important in the explanation without changing the model's actual predictions. The output helps you understand whether your explanations are easily fooled; a sketch of the underlying idea follows below.
No commits in the last 6 months.
Use this if you are a machine learning practitioner concerned about how trustworthy your model explanations are and how easily they can be manipulated.
Not ideal if you are looking for a tool that generates more accurate or robust explanations directly, rather than one that evaluates their weaknesses.
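At a high level, the attack builds a 'scaffolded' classifier: an out-of-distribution detector decides whether an input looks like real data or like a LIME/SHAP perturbation, routing real inputs to the biased model and perturbation-like inputs to an innocuous one. The sketch below illustrates that routing idea with scikit-learn; the ScaffoldedModel class, its constructor arguments, and the Gaussian perturbation scheme are illustrative assumptions for tabular data, not the repository's actual API.

# Minimal sketch of the scaffolding idea, assuming tabular data and scikit-learn.
# Names (ScaffoldedModel, biased_model, innocuous_model) are illustrative, not the repo's API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class ScaffoldedModel:
    """Route real-looking rows to the biased model and perturbation-like rows
    (such as those LIME/SHAP generate while explaining) to an innocuous model."""

    def __init__(self, biased_model, innocuous_model, X_real):
        self.biased_model = biased_model
        self.innocuous_model = innocuous_model
        # Train an off-manifold detector: real rows are labeled 1, synthetic perturbations 0.
        X_perturbed = X_real + np.random.normal(0.0, 0.3, X_real.shape)
        X_det = np.vstack([X_real, X_perturbed])
        y_det = np.concatenate([np.ones(len(X_real)), np.zeros(len(X_perturbed))])
        self.detector = RandomForestClassifier(n_estimators=100).fit(X_det, y_det)

    def predict(self, X):
        # In-distribution rows get the biased prediction; off-manifold rows the innocuous one.
        on_manifold = self.detector.predict(X) == 1
        return np.where(on_manifold,
                        self.biased_model.predict(X),
                        self.innocuous_model.predict(X))

Because real inputs pass the detector, the scaffold behaves like the biased model in deployment, while an explainer whose perturbations fail the detector mostly observes the innocuous model and reports its harmless features as important.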
Stars: 84
Forks: 19
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 08, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dylan-slack/Fooling-LIME-SHAP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
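If you prefer Python, the same endpoint can be queried with the requests library; this snippet only assumes the package is installed and makes no assumptions about the JSON fields in the response.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/dylan-slack/Fooling-LIME-SHAP")
resp = requests.get(url, timeout=30)  # no API key needed for up to 100 requests/day
resp.raise_for_status()
print(resp.json())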
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...