mandarjoshi90/triviaqa

Code for the TriviaQA reading comprehension dataset

Score: 45 / 100 (Emerging)

This project helps researchers and developers evaluate how well question-answering models understand and answer questions grounded in large text documents. Given a dataset of trivia questions with their source documents, plus your model's predicted answers, it scores the accuracy of those predictions. Its primary audience is natural language processing (NLP) researchers and machine learning engineers building reading-comprehension systems.
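TriviaQA-style evaluation scores each prediction against the gold answer with exact-match and token-level F1. A minimal sketch of SQuAD-style metrics of that kind (the normalization details here are illustrative, not the repo's exact code):

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # strip English articles
    return " ".join(s.split())

def exact_match(prediction, gold):
    """1-to-1 string match after normalization."""
    return normalize_answer(prediction) == normalize_answer(gold)

def f1_score(prediction, gold):
    """Harmonic mean of token precision and recall after normalization."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower.")` is true after normalization, while a partially correct span still earns partial F1 credit.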

332 stars. No commits in the last 6 months.

Use this if you are building or evaluating a machine learning model designed to answer complex questions by reading large bodies of text, such as Wikipedia articles.

Not ideal if you are looking for a pre-trained question-answering model or a tool for general text analysis unrelated to evaluating reading comprehension.

Tags: Natural Language Processing, Reading Comprehension, Question Answering, AI Model Evaluation, NLP Research
Status: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 332
Forks: 46
Language: Python
License: Apache-2.0
Last pushed: Apr 05, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/mandarjoshi90/triviaqa"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.