vectara/mirage-bench

Repository for multilingual generation, RAG evaluation, and surrogate judge training for the Arena RAG leaderboard (NAACL'25)

Overall score: 37 / 100 (Emerging)

This tool helps AI engineers and researchers rigorously test and compare how well different large language models (LLMs) answer questions using Retrieval-Augmented Generation (RAG) across multiple languages. It takes RAG system outputs (generated answers) and produces evaluation scores, pairwise comparisons, and insights into model performance. You would use this if you are developing or fine-tuning multilingual RAG systems and need objective performance metrics.

No commits in the last 6 months. Available on PyPI.
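Since the project is published on PyPI, it can presumably be installed with pip; the package name below is assumed from the repository name and is not confirmed here:

pip install mirage-bench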

Use this if you need to generate, evaluate, and benchmark the quality of RAG-based answers from various LLMs, especially in diverse language contexts, to identify the best-performing models for your applications.

Not ideal if you are looking for a simple RAG application for end-users rather than a detailed, technical benchmarking suite for RAG system developers.

Tags: LLM evaluation, RAG benchmarking, multilingual AI, natural language processing, AI model comparison
Stale: 6 months
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 7 / 25

How are scores calculated?
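The headline score appears to be the sum of the four category scores above: 0 (Maintenance) + 5 (Adoption) + 25 (Maturity) + 7 (Community) = 37 out of 100.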

Stars: 10
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Apr 10, 2025
Commits (30d): 0
Dependencies: 16

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/mirage-bench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
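If you prefer fetching this data from code, here is a minimal Python sketch using the requests library. It calls the same endpoint as the curl example above; the structure of the JSON response is not documented here, so the example simply prints whatever comes back:

import requests

# Public endpoint from the curl example above; no API key needed
# for up to 100 requests per day.
url = "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/mirage-bench"

resp = requests.get(url, timeout=10)
resp.raise_for_status()   # fail loudly on HTTP errors
data = resp.json()        # parse the JSON payload
print(data)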