amazon-science/GaRAGe

[ACL 2025] GaRAGe: A Benchmark with Grounding Annotations for RAG Evaluation.

Score: 28 / 100 (Experimental)

This benchmark evaluates how well a Retrieval Augmented Generation (RAG) system uses retrieved information to answer questions. It provides a large set of questions, human-written answers, and detailed annotations indicating which retrieved passages were actually relevant. This lets you assess whether your RAG system identifies and uses only the necessary information from its sources, or correctly deflects when the available information is insufficient. It is designed for researchers and practitioners improving AI systems that synthesize information from multiple documents.
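As a minimal sketch of how such annotations can drive an evaluation, the snippet below scores a system's cited passages against annotator relevance labels. The record schema and field names are assumptions for illustration only, not the actual GaRAGe format; consult the repository for the real data layout.

# Hypothetical sketch: score cited passages against grounding annotations.
# The schema below is assumed for illustration, not the real GaRAGe format.

def citation_precision_recall(cited_ids, relevant_ids):
    """Precision/recall of a system's citations vs. annotated relevance."""
    cited, relevant = set(cited_ids), set(relevant_ids)
    if not cited or not relevant:
        return 0.0, 0.0
    hits = len(cited & relevant)
    return hits / len(cited), hits / len(relevant)

# Assumed example record and system output.
record = {
    "question": "When was the benchmark released?",
    "passages": [
        {"id": "p1", "relevant": True},
        {"id": "p2", "relevant": False},
        {"id": "p3", "relevant": True},
    ],
}
system_citations = ["p1", "p2"]  # passages the system's answer cites

relevant_ids = [p["id"] for p in record["passages"] if p["relevant"]]
precision, recall = citation_precision_recall(system_citations, relevant_ids)
print(f"grounding precision={precision:.2f}, recall={recall:.2f}")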

No commits in the last 6 months.

Use this if you are developing or evaluating a RAG system and need a robust dataset to measure its ability to accurately ground answers in provided evidence.

Not ideal if you are looking for a dataset to train a general-purpose language model from scratch, as this is specifically designed for RAG evaluation.

Tags: AI evaluation · Natural Language Processing · Information Retrieval · Question Answering Systems · Large Language Models
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 15 / 25
Community: 6 / 25

Stars: 12
Forks: 1
Language: not listed
License: not listed
Last pushed: Jun 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/amazon-science/GaRAGe"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
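For programmatic use, here is a minimal Python sketch of the same request; it assumes only that the endpoint returns JSON, since the response schema is not documented in this listing.

import json
import urllib.request

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/rag/amazon-science/GaRAGe"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))  # inspect the full payload first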