daikikatsuragawa/awesome-counterfactual-explanations
This repository is a curated collection of information (keywords, papers, libraries, books, etc.) about counterfactual explanations. 🙃 Contributions are welcome! Our maintenance capacity is limited, so pull requests are especially appreciated.
This collection helps you understand why an AI model made a particular decision and what minimal changes would lead to a different outcome. For example, if a loan application was denied, a counterfactual explanation shows which specific factors (such as income or credit score) would need to change for approval. This resource is for anyone who uses or is affected by AI systems and needs to explain or influence their decisions, such as data scientists, risk managers, or policymakers.
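To make the loan example concrete, here is a minimal sketch of how a counterfactual explanation can be generated, assuming a hypothetical toy scoring model (the `approve` rule, feature names, and numbers below are illustrations, not part of the repository):

```python
def approve(income, credit_score):
    """Toy loan model: approve when a simple weighted score clears a threshold."""
    return 0.5 * (income / 1000) + 0.5 * (credit_score / 10) >= 60

def find_counterfactual(income, credit_score, step=1000, max_steps=100):
    """Naive counterfactual search: raise income in small steps until the
    model's decision flips from denied to approved."""
    for i in range(max_steps):
        candidate = income + i * step
        if approve(candidate, credit_score):
            return candidate  # smallest income (in step increments) that flips the decision
    return None  # no counterfactual found within the search budget

# Applicant denied at income=40_000, credit_score=600
print(approve(40_000, 600))             # → False (denied)
print(find_counterfactual(40_000, 600)) # → 60000 (income needed for approval)
```

Real counterfactual methods (many are cataloged in this repository) search over all mutable features at once and optimize for proximity, sparsity, and plausibility rather than stepping through a single feature, but the core question is the same: what is the smallest change that flips the outcome?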
No commits in the last 6 months.
Use this if you need to understand the 'why' behind an AI model's decision and want to identify specific, actionable steps to change that outcome.
Not ideal if you are looking for a software tool or library to directly implement counterfactual explanations, as this is primarily a curated information resource.
Stars: 23
Forks: —
Language: —
License: MIT
Last pushed: Oct 27, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/daikikatsuragawa/awesome-counterfactual-explanations"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent…
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...