AQ-MedAI/RagQALeaderboard
RAG-QA Leaderboard
This tool helps researchers and developers objectively compare the performance of different Retrieval-Augmented Generation (RAG) systems. Given your RAG model (or a set of models) and a question-answering dataset, it produces detailed, standardized evaluation reports, so you can see how well your system answers questions relative to other systems under consistent metrics.
Use this if you need a fair and reproducible way to benchmark your RAG models against established datasets and compare their performance.
Not ideal if you are looking for a tool to build or train RAG models, as this is solely for evaluation and comparison.
Stars
25
Forks
1
Language
Python
License
—
Category
—
Last pushed
Jan 27, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/AQ-MedAI/RagQALeaderboard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
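As a minimal Python sketch of the same request (only the endpoint URL comes from the curl example above; the Authorization header name, the Bearer auth scheme, and the response fields are assumptions, so check the API docs before relying on them):

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/AQ-MedAI/RagQALeaderboard"

# Anonymous access allows 100 requests/day. With a free key the stated
# limit is 1,000/day; the header below is a hypothetical auth scheme.
headers = {}
# headers["Authorization"] = "Bearer YOUR_API_KEY"  # assumed, not confirmed

resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()          # fail loudly on HTTP errors
data = resp.json()               # field names depend on the API's schema
print(data)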
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems