ymcui/expmrc
ExpMRC: Explainability Evaluation for Machine Reading Comprehension
This project offers a benchmark for evaluating how well machine reading comprehension (MRC) models explain their answers. Given a question and a passage, a model is expected to produce not only the correct answer but also the text span from the passage that serves as evidence for it. The benchmark targets AI researchers and practitioners who develop and deploy language models and need their outputs to be transparent and justifiable.
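As a rough illustration of the task, explainability benchmarks of this kind typically score a predicted evidence span against a gold span with token-overlap F1; the sketch below shows that metric. It is a hedged approximation, not ExpMRC's official scorer, and the function name is an assumption.

```python
# Sketch of span-level F1 scoring, the kind of metric used to compare
# a model's predicted evidence span against the annotated gold span.
# This is an illustration, not the project's official evaluation script.
from collections import Counter

def span_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold text span."""
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both spans.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

A perfect span match scores 1.0; a partially overlapping span is rewarded in proportion to its precision and recall.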
No commits in the last 6 months.
Use this if you are building or evaluating machine reading comprehension models and need a standardized way to assess their ability to provide explanations (evidence) for their answers.
Not ideal if you are looking for a pre-trained model, or a tool for applying MRC directly without evaluating or improving its explainability.
Stars
62
Forks
4
Language
Python
License
CC-BY-SA-4.0
Category
Last pushed
Aug 30, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ymcui/expmrc"
Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000/day.
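The same endpoint can be called from Python instead of curl. Only the URL comes from the card above; the shape of the JSON response is not documented here, so the sketch simply returns the decoded payload.

```python
# Minimal sketch of querying the quality API shown above.
# Only the URL pattern is taken from the page; the response schema
# is unspecified, so the result is returned as a plain dict.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Assemble the per-repository endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch quality data; anonymous access is limited to 100 requests/day."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example: data = fetch_quality("nlp", "ymcui", "expmrc")
```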
Higher-rated alternatives
ymcui/cmrc2018
A Span-Extraction Dataset for Chinese Machine Reading Comprehension (CMRC 2018)
thunlp/MultiRD
Code and data of the AAAI-20 paper "Multi-channel Reverse Dictionary Model"
princeton-nlp/DensePhrases
[ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP'2021: Phrase Retrieval...
IndexFziQ/KMRC-Papers
A list of recent papers regarding knowledge-based machine reading comprehension.
danqi/rc-cnn-dailymail
CNN/Daily Mail Reading Comprehension Task