vectara/open-rag-eval

RAG evaluation without the need for "golden answers"

Quality score: 53 / 100 (Established)

This tool helps RAG (Retrieval Augmented Generation) system builders and integrators assess and improve the quality of their AI-powered question-answering systems. You provide a set of questions (queries) and receive detailed performance scores and diagnostic reports that identify how well your RAG system retrieves relevant information and generates accurate answers. It is aimed at anyone building or maintaining a RAG system, such as AI product managers, machine learning engineers, or solution architects.

347 stars. Available on PyPI.

Use this if you need to evaluate your RAG system's performance without the manual effort of creating "golden answers" or reference documents for every question.

Not ideal if you are looking for a basic natural language processing library or if your primary focus is on traditional search engine optimization rather than generative AI quality.

Tags: AI-powered search, Generative AI evaluation, RAG system optimization, Customer support automation, Knowledge base accuracy
Maintenance 6 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 12 / 25


Stars: 347
Forks: 21
Language: Python
License: Apache-2.0
Last pushed: Dec 15, 2025
Commits (30d): 0
Dependencies: 28

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/open-rag-eval"

Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000/day.
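The curl command above can also be scripted. Here is a minimal Python sketch using only the standard library; the endpoint path is taken from the curl example, but the shape of the JSON response is an assumption, so inspect the real payload before relying on specific fields.

```python
# Sketch: fetch a repository's quality report from the API shown above.
# The response schema is NOT documented here; treat field names as unknown.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository, mirroring the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is required within the free tier (100 requests/day).
    """
    with urllib.request.urlopen(quality_api_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Network call; prints whatever JSON the endpoint returns.
    report = fetch_quality("rag", "vectara", "open-rag-eval")
    print(json.dumps(report, indent=2))
```

Guarding the network call behind `__main__` keeps the helpers importable (and testable) without hitting the API.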