AQ-MedAI/RagQALeaderboard

RAG-QA Leaderboard

Score: 36/100 (Emerging)

This tool helps researchers and developers objectively compare the performance of different Retrieval-Augmented Generation (RAG) systems. You input your RAG model (or a set of models) and a specific question-answering dataset, and it outputs detailed, standardized evaluation reports. This allows you to understand how well your RAG system answers questions compared to others, using consistent metrics.

Use this if you need a fair and reproducible way to benchmark your RAG models against established datasets and compare their performance.

Not ideal if you are looking for a tool to build or train RAG models, as this is solely for evaluation and comparison.
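
The report metrics themselves are not documented on this page; as a rough sketch of the standardized metrics QA leaderboards conventionally report, exact match and token-level F1 can be computed as below. The function names are illustrative, not this project's API.

import re
from collections import Counter

def normalize(text: str) -> str:
    # Lowercase, drop punctuation, collapse whitespace: the usual
    # normalization applied before computing QA metrics.
    text = re.sub(r"[^\w\s]", "", text.lower())
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction: str, reference: str) -> float:
    # Token-overlap F1 between a predicted answer and a reference answer.
    pred = normalize(prediction).split()
    ref = normalize(reference).split()
    if not pred or not ref:
        return float(pred == ref)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)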

Tags: AI evaluation, Natural Language Processing, question answering, large language models, machine learning, benchmarking
No package published · No dependents
Maintenance: 10/25
Adoption: 7/25
Maturity: 15/25
Community: 4/25

These four 25-point components sum to the overall score: 10 + 7 + 15 + 4 = 36/100.

How are scores calculated?
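
The exact formula is behind that link; as a minimal sketch consistent with the numbers shown above (four components, each scored out of 25, summing to the overall total), the assumed calculation is:

scores = {"Maintenance": 10, "Adoption": 7, "Maturity": 15, "Community": 4}
# Assumption: each component is capped at 25 and the overall score is the plain sum.
overall = sum(min(value, 25) for value in scores.values())
print(f"{overall} / 100")  # 36 / 100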

Stars: 25
Forks: 1
Language: Python
License: (not listed)
Last pushed: Jan 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/AQ-MedAI/RagQALeaderboard"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
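
A minimal Python equivalent of the curl call above, assuming only that the endpoint returns JSON; the response schema is not documented here, so the sketch just prints whatever top-level fields come back.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/AQ-MedAI/RagQALeaderboard"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces HTTP errors, e.g. if the daily rate limit is hit
data = resp.json()

# The field names are not documented on this page, so print them generically.
for key, value in data.items():
    print(f"{key}: {value}")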