Kyung-Min/CompareModels_TRECQA
Compare six baseline deep learning models on TrecQA
When building a Question Answering (QA) system over a large body of text, selecting the sentence most relevant to a question is a critical challenge. This project evaluates different deep learning approaches to that task: given a question and an unstructured text corpus, it shows how accurately each model pinpoints the best answer sentence. This is useful for researchers and developers building or refining QA systems.
No commits in the last 6 months.
Use this if you are developing a question-answering system and need to compare different baseline deep learning models for selecting the best answer sentence from unstructured text.
Not ideal if you are looking for an out-of-the-box, production-ready question answering system, or if you are not a developer working on the underlying models.
Stars
60
Forks
21
Language
Jupyter Notebook
License
—
Category
Last pushed
May 08, 2018
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Kyung-Min/CompareModels_TRECQA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
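The same endpoint can be queried programmatically. Below is a minimal Python sketch built around the URL shown in the curl example; the structure of the JSON response is an assumption, since the schema is not documented here.

```python
# Minimal sketch: query the pt-edge quality API for a repository.
# Only the endpoint URL is taken from the curl example above; the
# response schema is undocumented, so the result is treated as an
# opaque JSON object.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (schema assumed)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Example request for this repository (no API key, per the note above).
    print(quality_url("Kyung-Min", "CompareModels_TRECQA"))
```

Note that the free tier allows 100 requests per day; a free key raises the limit to 1,000.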
Higher-rated alternatives
asahi417/lm-question-generation
Multilingual/multidomain question generation datasets, models, and Python library for question...
SparkJiao/SLQA
An Unofficial Pytorch Implementation of Multi-Granularity Hierarchical Attention Fusion Networks...
MurtyShikhar/Question-Answering
TensorFlow implementation of Match-LSTM and Answer pointer for the popular SQuAD dataset.
hsinyuan-huang/FlowQA
Implementation of conversational QA model: FlowQA (with slight improvement)
allenai/aokvqa
Official repository for the A-OKVQA dataset