mandarjoshi90/triviaqa
Code for the TriviaQA reading comprehension dataset
This project helps researchers and developers working on question-answering systems evaluate how well their models understand and answer questions grounded in large text documents. Given the dataset of trivia questions and their corresponding text sources, along with your model's predicted answers, it calculates the accuracy of those predictions. It is primarily used by natural language processing (NLP) researchers and machine learning engineers developing reading comprehension models.
332 stars. No commits in the last 6 months.
Use this if you are building or evaluating a machine learning model designed to answer complex questions by reading large bodies of text, such as Wikipedia articles.
Not ideal if you are looking for a pre-trained question-answering model or a tool for general text analysis unrelated to evaluating reading comprehension.
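To make the evaluation concrete: reading-comprehension benchmarks in this family typically score predictions with normalized exact match and token-level F1. The sketch below shows that standard SQuAD-style scoring (lowercasing, stripping punctuation and articles, then comparing token overlap); it illustrates the general technique and is not taken verbatim from this repository's evaluation script.

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # remove English articles
    return " ".join(s.split())


def exact_match(prediction: str, truth: str) -> bool:
    """True if the normalized strings are identical."""
    return normalize_answer(prediction) == normalize_answer(truth)


def f1_score(prediction: str, truth: str) -> float:
    """Token-overlap F1 between normalized prediction and ground truth."""
    pred_tokens = normalize_answer(prediction).split()
    truth_tokens = normalize_answer(truth).split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)
```

Per-question scores are usually averaged over the dataset (taking the max over multiple acceptable answer aliases per question) to produce the final accuracy figures.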
Stars
332
Forks
46
Language
Python
License
Apache-2.0
Category
NLP
Last pushed
Apr 05, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/mandarjoshi90/triviaqa"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
PaddlePaddle/RocketQA
🚀 RocketQA, dense retrieval for information retrieval and question answering, including both...
shuaihuaiyi/QA
A Chinese question-answering system implemented with deep learning algorithms
allenai/deep_qa
A deep NLP library, based on Keras / tf, focused on question answering (but useful for other NLP too)
worldbank/iQual
iQual is a package that leverages natural language processing to scale up interpretative...
fhamborg/Giveme5W1H
Extraction of the journalistic five W and one H questions (5W1H) from news articles: who did...