epfl-ml4ed/evaluating-explainers
Comparing 5 different XAI techniques (LIME, PermSHAP, KernelSHAP, DiCE, CEM) through quantitative metrics. Published at EDM 2022.
This project helps educational researchers and learning scientists understand why a student might succeed or fail in a Massive Open Online Course (MOOC). You feed data about student behavior and course interactions into a 'black-box' prediction model; the project then reports which student actions or features matter most for predicting success, and shows how the choice of explanation technique changes those conclusions.
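For orientation, here is a minimal sketch of the kind of analysis involved: explaining one student's predicted outcome with two of the five techniques (LIME and KernelSHAP) on a toy classifier. The feature names and data are invented for illustration; this is not the repository's actual pipeline.

# Minimal sketch: explain one student's predicted outcome with LIME and
# KernelSHAP on a toy model. Feature names and data are invented for
# illustration; this is not the repository's actual pipeline.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["videos_watched", "forum_posts", "quiz_attempts", "days_active"]
X = rng.random((500, 4))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # toy "pass/fail" label
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME: fit a local surrogate around one student and read off feature weights.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      class_names=["fail", "pass"])
print(lime_explainer.explain_instance(X[0], model.predict_proba).as_list())

# KernelSHAP: model-agnostic Shapley-value approximation for the same student.
predict_pass = lambda data: model.predict_proba(data)[:, 1]
shap_explainer = shap.KernelExplainer(predict_pass, shap.sample(X, 50))
shap_values = shap_explainer.shap_values(X[:1])  # shape (1, n_features)
print(dict(zip(feature_names, shap_values[0])))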
No commits in the last 6 months.
Use this if you are an educational researcher evaluating methods for explaining student-success predictions in MOOCs and want to gauge how reliable those explanations are (one simple agreement check is sketched below).
Not ideal if you are looking for a tool to build or deploy new predictive models, or if you need to explain models outside educational contexts.
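One simple, hypothetical way to quantify agreement between explainers is rank correlation of their absolute feature importances for the same student. The paper defines its own quantitative metrics, so treat this only as an illustrative stand-in with made-up numbers.

# Hypothetical reliability check: Spearman rank correlation between two
# explainers' absolute feature importances for the same student.
import numpy as np
from scipy.stats import spearmanr

lime_weights = np.abs([0.21, 0.02, 0.05, 0.18])  # hypothetical LIME weights
shap_values = np.abs([0.19, 0.01, 0.08, 0.15])   # hypothetical SHAP values
rho, pval = spearmanr(lime_weights, shap_values)
print(f"rank agreement: rho={rho:.2f} (p={pval:.2f})")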
Stars: 17
Forks: 3
Language: PureBasic
License: MIT
Last pushed: Jul 25, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/epfl-ml4ed/evaluating-explainers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
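A sketch of calling the same endpoint from Python, assuming a JSON response; the field names below ("stars", "last_pushed") are guesses, not documented API keys, so inspect the actual payload before relying on them.

# Sketch of calling the endpoint from Python; JSON keys are assumptions.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/epfl-ml4ed/evaluating-explainers")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data.get("stars"), data.get("last_pushed"))  # hypothetical keys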
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...