Kyung-Min/CompareModels_TRECQA

Compare six baseline deep learning models on TrecQA

Score: 35 / 100 (Emerging)

When building a Question Answering (QA) system over a large body of text, selecting the most relevant sentence to answer a question is a critical challenge. This project evaluates different computational approaches to that task: given a question and an unstructured text corpus, it shows how accurately various deep learning models pinpoint the best answer sentence. This is useful for researchers and developers building or refining QA systems.
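To make the task concrete, here is a minimal sketch of answer sentence selection using a trivial word-overlap baseline. This is purely illustrative; it is not one of the six neural models the repository compares, and the function and variable names are hypothetical.

```python
def select_answer(question: str, sentences: list[str]) -> str:
    """Pick the candidate sentence with the largest word overlap
    with the question (a trivial lexical baseline, NOT one of the
    repo's deep learning models)."""
    q_words = set(question.lower().split())
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

candidates = [
    "TREC is a series of IR evaluation workshops.",
    "The TrecQA dataset pairs questions with candidate answer sentences.",
    "Deep learning models score each candidate sentence.",
]
print(select_answer("What does the TrecQA dataset contain?", candidates))
# prints the second candidate, which shares the most words with the question
```

The deep learning models in the repository replace this lexical overlap score with learned semantic similarity between question and candidate sentence.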

No commits in the last 6 months.

Use this if you are developing a question-answering system and need to compare different baseline deep learning models for selecting the best answer sentence from unstructured text.

Not ideal if you are looking for an out-of-the-box, production-ready question answering system, or if you are not a developer working on the underlying models.

Tags: Question Answering, Natural Language Processing, Information Retrieval, Deep Learning, Research, Text Analytics
Badges: No License, Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 19 / 25


Stars: 60
Forks: 21
Language: Jupyter Notebook
License: none
Last pushed: May 08, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Kyung-Min/CompareModels_TRECQA"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
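The curl command above can also be issued from Python. A minimal sketch using only the standard library is shown below; the URL structure comes from the curl example, but the shape of the returned JSON is not documented here, so the fetch helper simply decodes whatever the endpoint returns.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a given repository,
    percent-encoding the path segments."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality JSON (requires network access;
    response fields are not specified on this page)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("Kyung-Min", "CompareModels_TRECQA"))
# prints https://pt-edge.onrender.com/api/v1/quality/nlp/Kyung-Min/CompareModels_TRECQA
```

Anonymous use is rate-limited to 100 requests/day, so a script polling many repositories would need the free API key mentioned above.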