ehsk/OpenQA-eval

ACL 2023: Evaluating Open-Domain Question Answering in the Era of Large Language Models

Score: 27 / 100 (Experimental)

This tool helps researchers and developers evaluate the accuracy of open-domain question answering (QA) systems, especially those powered by large language models. You provide a file of questions and the answers generated by your QA model, and it reports performance metrics, including comparisons against human judgments and LLM-based evaluations. It's designed for anyone building automated question-answering systems or assessing their quality.

No commits in the last 6 months.

Use this if you need to measure how well your open-domain QA model, particularly one using large language models, answers questions, benchmarking its output against established metrics and human judgment.

Not ideal if you are looking for a tool to build or train a question-answering model rather than evaluate its performance.

Tags: question-answering, natural-language-processing, model-evaluation, AI-assessment, LLM-performance
Status: Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 3 / 25

Stars: 47
Forks: 1
Language: Python
License: MIT
Last pushed: Jan 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ehsk/OpenQA-eval"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
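
For programmatic access beyond curl, here is a minimal Python sketch using the requests library. The endpoint URL comes from the example above; the shape of the JSON response is not documented here, so the script prints the raw payload rather than assuming field names, and the fetch_quality helper is hypothetical.

import requests

def fetch_quality(repo: str) -> dict:
    """Fetch the quality card for a repo from the pt-edge API (hypothetical helper)."""
    url = f"https://pt-edge.onrender.com/api/v1/quality/llm-tools/{repo}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # surfaces HTTP errors, e.g. hitting the daily rate limit
    return response.json()

if __name__ == "__main__":
    data = fetch_quality("ehsk/OpenQA-eval")
    print(data)  # inspect the payload to learn the actual schema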