ymcui/expmrc

ExpMRC: Explainability Evaluation for Machine Reading Comprehension

Score: 32 / 100 (Emerging)

This project offers a benchmark for evaluating how well machine reading comprehension (MRC) models can explain their answers. Given a question and a passage, a model is expected to produce not only the correct answer but also the specific text span from the passage that serves as evidence. It is aimed at AI researchers and practitioners who develop and deploy language models and need to ensure those models provide transparent and justifiable responses.

No commits in the last 6 months.

Use this if you are building or evaluating machine reading comprehension models and need a standardized way to assess their ability to provide explanations (evidence) for their answers.

Not ideal if you are looking for a pre-trained model or a tool to directly apply MRC without needing to evaluate or improve its explainability.

natural-language-processing machine-reading-comprehension explainable-ai model-evaluation text-understanding
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 62
Forks: 4
Language: Python
License: CC-BY-SA-4.0
Last pushed: Aug 30, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ymcui/expmrc"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.