dylan-slack/Fooling-LIME-SHAP

Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)

Quality score: 44 / 100 (Emerging)

This project helps machine learning developers and researchers evaluate the robustness of their model explanations. It takes an existing predictive model and an explanation method (like LIME or SHAP) as input, then generates an 'attack' that can manipulate what features appear important in the explanation without changing the model's actual prediction. The output helps you understand if your explanations are easily fooled.
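
To make the mechanism concrete, here is a minimal sketch of the scaffolding idea the project studies: an out-of-distribution detector sends explainer perturbations to an innocuous model while real inputs still hit the biased model. It uses plain scikit-learn rather than the repo's own classes (its actual API, e.g. adversarial_models.py, is not reproduced here), and names like scaffolded_predict are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: feature 0 plays the role of the "sensitive" feature.
X = rng.normal(size=(1000, 5))

def biased_pred(X):
    # Discriminatory model: keys entirely on the sensitive feature.
    return (X[:, 0] > 0).astype(int)

def innocuous_pred(X):
    # Benign model: ignores the sensitive feature.
    return (X[:, 1] > 0).astype(int)

# LIME/SHAP-style perturbations (noise around real points) tend to fall off
# the data manifold; train a detector to tell real from perturbed samples.
perturbed = X + rng.normal(scale=1.0, size=X.shape)
detector_X = np.vstack([X, perturbed])
detector_y = np.hstack([np.ones(len(X)), np.zeros(len(perturbed))])  # 1 = real
ood_detector = RandomForestClassifier(n_estimators=100, random_state=0)
ood_detector.fit(detector_X, detector_y)

def scaffolded_predict(X_query):
    # Biased behaviour on in-distribution points, innocuous on perturbations.
    is_real = ood_detector.predict(X_query).astype(bool)
    out = innocuous_pred(X_query)
    out[is_real] = biased_pred(X_query)[is_real]
    return out

# Real inputs get the biased decision, but an explainer probing with noisy
# samples mostly sees the innocuous model, so feature 0 looks unimportant.
print(scaffolded_predict(X[:5]))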

No commits in the last 6 months.

Use this if you are a machine learning practitioner who wants to check whether your model explanations can be trusted or are easily manipulated.

Not ideal if you want a tool that generates more accurate or robust explanations, rather than one that probes their weaknesses.

Tags: AI ethics, model interpretability, algorithmic fairness, AI security, machine learning auditing
Status: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 84
Forks: 19
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 08, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dylan-slack/Fooling-LIME-SHAP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
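
The same data can be fetched programmatically. Below is a small Python equivalent of the curl call above; the response schema is not documented here, so the JSON is simply printed rather than parsed into specific fields.

import json
import urllib.request

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/dylan-slack/Fooling-LIME-SHAP")

# No API key needed for the free tier (100 requests/day).
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))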