Human-Centric-Machine-Learning/counterfactual-explanations-mdp
Code for "Counterfactual Explanations in Sequential Decision Making Under Uncertainty", NeurIPS 2021
This tool helps researchers and clinicians understand why a patient's cognitive behavioral therapy (CBT) treatment might have taken a certain path. Given patient therapy data, it generates 'what-if' scenarios showing how small changes in decisions could have led to different outcomes, making the therapy process more transparent. It is aimed at data scientists and machine learning researchers working with sequential decision-making data, particularly in healthcare.
No commits in the last 6 months.
Use this if you are analyzing sequential treatment data, like patient therapy records, and need to explain why specific outcomes occurred by exploring hypothetical alternative paths.
Not ideal if you are looking for a plug-and-play clinical tool for direct patient interaction or if your data does not involve sequential decisions under uncertainty.
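To make the 'what-if' idea concrete, here is a minimal sketch of a counterfactual query on a toy tabular MDP. The states, actions, and rewards below are invented for illustration and are not from the paper's CBT data or its actual method; the sketch only shows the general shape of the question the repo addresses: which single decision change would have led to a different outcome.

```python
import itertools

# Toy deterministic MDP (assumed for illustration).
# States: 0 = low engagement, 1 = moderate, 2 = high.
TRANSITIONS = {  # (state, action) -> next state
    (0, "maintain"): 0, (0, "escalate"): 1,
    (1, "maintain"): 1, (1, "escalate"): 2,
    (2, "maintain"): 2, (2, "escalate"): 2,
}
REWARD = {0: 0.0, 1: 0.5, 2: 1.0}  # reward for the state reached


def rollout(start, actions):
    """Replay a sequence of actions; return (final state, total reward)."""
    state, total = start, 0.0
    for a in actions:
        state = TRANSITIONS[(state, a)]
        total += REWARD[state]
    return state, total


def one_change_counterfactuals(start, observed):
    """All trajectories differing from the observed one in a single action."""
    results = []
    for i, alt in itertools.product(range(len(observed)),
                                    ["maintain", "escalate"]):
        if alt == observed[i]:
            continue
        cf = observed[:i] + [alt] + observed[i + 1:]
        results.append((i, alt, rollout(start, cf)))
    return results


observed = ["maintain", "maintain", "escalate"]
_, base_return = rollout(0, observed)
for step, action, (final, ret) in one_change_counterfactuals(0, observed):
    if ret > base_return:
        print(f"Changing step {step} to '{action}' raises return "
              f"{base_return:.1f} -> {ret:.1f}")
```

The actual repository handles stochastic transitions and learned models; this deterministic enumeration is only a conceptual stand-in.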
Stars: 16
Forks: 4
Language: Jupyter Notebook
License: —
Last pushed: Feb 08, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Human-Centric-Machine-Learning/counterfactual-explanations-mdp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...