vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
This tool helps builders and integrators of RAG (Retrieval-Augmented Generation) systems assess and improve the quality of their AI-powered question-answering pipelines. You supply a set of queries and receive detailed performance scores and diagnostic reports that identify how well your RAG system retrieves relevant information and generates accurate answers. It is aimed at anyone building or maintaining a RAG system, such as AI product managers, machine learning engineers, or solution architects.
347 stars. Available on PyPI.
Use this if you need to evaluate your RAG system's performance without the manual effort of creating "golden answers" or reference documents for every question.
Not ideal if you are looking for a general-purpose natural language processing library, or if your primary focus is traditional search engine optimization rather than generative AI quality.
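To make the data flow concrete, here is a toy, reference-free scoring sketch. It is not open-rag-eval's API or metrics: every name in it (EvalSample, toy_scores) is a hypothetical placeholder, and the naive word-overlap scoring merely stands in for whatever judging the library actually performs. It only illustrates the shape of the inputs (query, retrieved passages, generated answer) and outputs (per-query scores) that this style of evaluation works with.

# Toy, reference-free scoring sketch -- NOT open-rag-eval's API or
# metrics. All names here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class EvalSample:
    query: str                     # the user question
    retrieved_passages: list[str]  # what the RAG system retrieved
    generated_answer: str          # what the RAG system answered

def toy_scores(sample: EvalSample) -> dict[str, float]:
    """Naive lexical-overlap scores; a real evaluator would use a
    stronger judge instead of word overlap."""
    q = set(sample.query.lower().split())
    # Retrieval: fraction of passages sharing at least one query term.
    hits = sum(1 for p in sample.retrieved_passages
               if q & set(p.lower().split()))
    retrieval = hits / max(len(sample.retrieved_passages), 1)
    # Groundedness: fraction of answer words found in the passages.
    ctx = set(" ".join(sample.retrieved_passages).lower().split())
    ans = sample.generated_answer.lower().split()
    grounded = sum(w in ctx for w in ans) / max(len(ans), 1)
    return {"retrieval": retrieval, "groundedness": grounded}

sample = EvalSample(
    query="What is retrieval augmented generation?",
    retrieved_passages=["Retrieval augmented generation combines a "
                        "retriever with a language model."],
    generated_answer="It pairs a retriever with a language model.",
)
print(toy_scores(sample))  # per-query scores, no golden answer needed

Note that no golden answer appears anywhere in the inputs; the evaluator scores each query from the system's own retrieval and generation output, which is the property this tool advertises.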
Stars: 347
Forks: 21
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 15, 2025
Commits (30d): 0
Dependencies: 28
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/open-rag-eval"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
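The same endpoint can also be fetched programmatically. Here is a minimal sketch using only the Python standard library, assuming the endpoint returns a JSON body (the response schema is not documented on this page):

import json
import urllib.request

# Fetch this tool's quality data from the public API (no key needed
# for up to 100 requests/day, per the note above).
URL = "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/open-rag-eval"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumes a JSON body

print(json.dumps(data, indent=2))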
Related tools
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems
2501Pr0ject/RAGnarok-AI
Local-first RAG evaluation framework for LLM applications. 100% local, no API keys required.