AQ-MedAI/PRGB

[AAAI 2026] RAG, Benchmark, robust RAG generation

Score: 39 / 100 (Emerging)

When building or using Retrieval-Augmented Generation (RAG) systems, it's crucial to know how well they actually use external knowledge and provide accurate answers. This benchmark helps AI researchers and RAG system developers rigorously evaluate different RAG models. You run your RAG model against it and receive detailed reports on its performance across complex scenarios, including how it handles noisy retrieved context and intricate reasoning tasks.

Use this if you need to objectively compare the performance of various RAG models or thoroughly test a new RAG system's ability to provide faithful and accurate responses from its retrieved documents.

Not ideal if you are looking for a tool to build or deploy a RAG system, as this is purely for performance benchmarking and evaluation.

AI-evaluation NLP-benchmarking RAG-system-testing model-performance natural-language-generation
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 11 / 25


Stars: 34
Forks: 4
Language: Python
License:
Last pushed: Nov 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/AQ-MedAI/PRGB"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
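The same endpoint can be called from Python using only the standard library. This is a minimal sketch: the URL pattern (`/api/v1/quality/rag/{owner}/{repo}`) is taken from the curl example above, but the structure of the JSON payload is not documented here, so the fetch helper simply returns the parsed response as a dict.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a GitHub repo (owner/name)."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and parse the quality report.

    Field names inside the returned JSON are an assumption of the
    undocumented API; inspect the dict keys before relying on them.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the endpoint for the PRGB repo shown in the curl example.
    print(quality_url("AQ-MedAI", "PRGB"))
```

Anonymous access is capped at 100 requests/day, so cache responses locally if you poll many repositories.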