ehsk/OpenQA-eval
ACL 2023: Evaluating Open-Domain Question Answering in the Era of Large Language Models
This tool helps researchers and developers evaluate the accuracy of open-domain question answering (QA) systems, especially those powered by large language models. You supply a file of questions along with the answers your QA model generated, and it outputs performance metrics, including comparisons against human judgments and LLM-based evaluations. It's designed for anyone building or assessing automated question-answering systems.
No commits in the last 6 months.
Use this if you need to measure how well your open-domain QA model, particularly one built on large language models, answers questions, judged against established metrics and human evaluation.
Not ideal if you are looking for a tool to build or train a question-answering model rather than evaluate its performance.
Stars: 47
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jan 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ehsk/OpenQA-eval"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
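For scripted access, a minimal Python sketch (Python is also the repo's language) that calls the same endpoint as the curl command above. The response schema is not documented here, so the code only fetches and prints the raw JSON rather than assuming any field names.

import requests

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ehsk/OpenQA-eval"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces 4xx/5xx errors, e.g. if the daily rate limit is hit

data = resp.json()
print(data)  # inspect the payload first; the exact schema is undocumented here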
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea...
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct