AQ-MedAI/PRGB
[AAAI 2026] RAG, benchmark, robust RAG generation
When building or using Retrieval-Augmented Generation (RAG) systems, it is crucial to know how well they actually use external knowledge and produce accurate answers. PRGB helps AI researchers and RAG developers rigorously evaluate RAG models: you plug in your model and receive detailed reports on its performance across complex scenarios, including noisy retrieved documents and multi-step reasoning tasks.
Use this if you need to objectively compare the performance of various RAG models or thoroughly test a new RAG system's ability to provide faithful and accurate responses from its retrieved documents.
Not ideal if you are looking for a tool to build or deploy a RAG system, as this is purely for performance benchmarking and evaluation.
Stars: 34
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Nov 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/AQ-MedAI/PRGB"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
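A minimal sketch of the same lookup in Python, assuming the endpoint returns a JSON body; the response fields are not documented on this page, so the sketch only prints the raw payload:

import requests

# Same public endpoint as the curl example above; the path takes the
# owner/repo slug of the project being looked up.
url = "https://pt-edge.onrender.com/api/v1/quality/rag/AQ-MedAI/PRGB"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors, e.g. hitting the 100 requests/day limit

data = resp.json()  # assumption: the API responds with JSON
print(data)

# Note: a free key raises the limit to 1,000 requests/day, but how the
# key is passed (header vs. query parameter) is not documented here.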
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems