epfl-ml4ed/evaluating-explainers

Comparing 5 different XAI techniques (LIME, PermSHAP, KernelSHAP, DiCE, CEM) through quantitative metrics. Published at EDM 2022.

Score: 35 / 100 (Emerging)

This project helps educational researchers and learning scientists understand why a student might succeed or fail in a Massive Open Online Course (MOOC). You feed student behavior and course-interaction data into a 'black-box' prediction model; the project then generates insights into which student actions or features matter most for predicting success, and lets you compare how different explanation techniques affect those insights.
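As an illustration of what "comparing explainers through quantitative metrics" can look like, here is a minimal sketch that measures agreement between two explainers' feature-attribution vectors with cosine similarity. This is a generic agreement metric, not necessarily the one used in the EDM 2022 paper, and the attribution values below are made up:

```python
import numpy as np

def attribution_agreement(attr_a, attr_b):
    """Cosine similarity between two explainers' feature-attribution
    vectors for the same prediction (1.0 = identical direction)."""
    a = np.asarray(attr_a, dtype=float)
    b = np.asarray(attr_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical per-feature importances from two explainers (e.g. LIME
# and KernelSHAP) for one student's success prediction.
lime_attr = [0.42, -0.10, 0.31, 0.05]
shap_attr = [0.38, -0.07, 0.29, 0.09]
print(round(attribution_agreement(lime_attr, shap_attr), 3))
```

A score near 1.0 suggests the two techniques largely agree on which features drive the prediction; values near 0 indicate unrelated attributions.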

No commits in the last 6 months.

Use this if you are an educational researcher evaluating different methods to explain student success predictions in MOOCs and want to understand the reliability of those explanations.

Not ideal if you are looking for a tool to build or deploy new predictive models, or if you need to explain models outside of educational contexts.

educational-data-mining learning-analytics student-success-prediction MOOCs explainable-AI
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 17
Forks: 3
Language: PureBasic
License: MIT
Last pushed: Jul 25, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/epfl-ml4ed/evaluating-explainers"

Open to everyone: 100 requests/day with no key needed, or get a free API key for 1,000 requests/day.