vectara/mirage-bench
Repository for multilingual generation, RAG evaluation, and surrogate judge training for the Arena RAG leaderboard (NAACL'25)
This tool helps AI engineers and researchers rigorously test and compare how well different large language models (LLMs) answer questions using Retrieval-Augmented Generation (RAG) across multiple languages. It takes RAG system outputs (generated answers) and produces evaluation scores, pairwise comparisons, and insights into model performance. You would use this if you are developing or fine-tuning multilingual RAG systems and need objective performance metrics.
No commits in the last 6 months. Available on PyPI.
Use this if you need to generate, evaluate, and benchmark the quality of RAG-based answers from various LLMs, especially in diverse language contexts, to identify the best-performing models for your applications.
Not ideal if you are looking for a simple RAG application for end-users rather than a detailed, technical benchmarking suite for RAG system developers.
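The leaderboard setting described above turns pairwise comparisons between model answers into a ranking. As a generic illustration of that step (a sketch only, not mirage-bench's own API; the model names and comparison data are placeholders), the Bradley-Terry model fits one strength score per model from win/loss judgments:

# Generic sketch (not mirage-bench's API): aggregating pairwise win/loss
# judgments from a judge into a leaderboard via Bradley-Terry maximum likelihood.
# Model names and the `comparisons` list are illustrative placeholders.
from collections import defaultdict

comparisons = [            # (winner, loser) pairs as a surrogate judge might emit them
    ("model_a", "model_b"),
    ("model_a", "model_c"),
    ("model_b", "model_c"),
    ("model_c", "model_b"),
]

models = sorted({m for pair in comparisons for m in pair})
wins = defaultdict(lambda: defaultdict(int))
for winner, loser in comparisons:
    wins[winner][loser] += 1

# Bradley-Terry strengths via the standard fixed-point (MM) iteration
strength = {m: 1.0 for m in models}
for _ in range(100):
    new = {}
    for i in models:
        total_wins = sum(wins[i][j] for j in models if j != i)
        denom = sum(
            (wins[i][j] + wins[j][i]) / (strength[i] + strength[j])
            for j in models if j != i
        )
        new[i] = total_wins / denom if denom > 0 else strength[i]
    norm = sum(new.values())
    strength = {m: s / norm for m, s in new.items()}  # normalize for stability

for rank, (m, s) in enumerate(sorted(strength.items(), key=lambda kv: -kv[1]), 1):
    print(f"{rank}. {m}: {s:.3f}")

Higher strength means the model wins more of its pairwise match-ups; arena-style leaderboards are commonly built on this kind of aggregation.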
Stars: 10
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Apr 10, 2025
Commits (30d): 0
Dependencies: 16
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/mirage-bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
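A minimal sketch of fetching the same data from Python instead of curl; the URL comes from the command above, while the structure of the JSON response is not documented here, so it is simply printed as received:

# Minimal sketch: call the endpoint shown in the curl example above.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/mirage-bench"
resp = requests.get(url, timeout=30)
resp.raise_for_status()  # free tier allows 100 requests/day without a key
print(resp.json())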
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems