IndexFziQ/MSMARCO-MRC-Analysis

Analysis of the MS-MARCO leaderboard for the machine reading comprehension task.

Score: 37 / 100 (Emerging)

This project offers a breakdown of how well different AI models perform on the MS MARCO benchmark for machine reading comprehension. It takes in various AI model results and details about the MS MARCO dataset, then outputs a comparison of model accuracy in generating human-like answers to real-world questions. Anyone involved in developing or evaluating natural language processing (NLP) systems for question answering would find this useful.

No commits in the last 6 months.

Use this if you need to understand the historical performance of various AI models on a large-scale, real-world question-answering benchmark.

Not ideal if you are looking for an actively maintained and updated leaderboard, as the MS MARCO Q&A tasks have been retired.

natural-language-processing question-answering machine-reading-comprehension AI-model-evaluation NLP-benchmarking
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 15 / 25
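The overall score of 37 / 100 is the sum of these four category scores: 0 + 6 + 16 + 15 = 37.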


Stars: 21
Forks: 4
Language: —
License: MIT
Last pushed: Dec 14, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/IndexFziQ/MSMARCO-MRC-Analysis"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
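For programmatic access beyond curl, the sketch below is a minimal Python example (standard library only) that fetches the same report and pretty-prints it. It assumes the endpoint returns a JSON payload; this page does not document the response schema.

import json
import urllib.request

# Same endpoint as the curl example above; no key is needed
# within the free tier's 100 requests/day limit.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/IndexFziQ/MSMARCO-MRC-Analysis"

with urllib.request.urlopen(URL, timeout=10) as response:
    report = json.load(response)  # assumes JSON; inspect the raw body if decoding fails

print(json.dumps(report, indent=2))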