naver/bergen
Benchmarking library for RAG
BERGEN helps evaluate how well your Retrieval-Augmented Generation (RAG) system answers questions, especially when comparing different components. You input a RAG system configuration (like which retriever or language model to use) and a dataset of questions, and it outputs performance metrics like answer accuracy and relevance scores. This is for AI researchers and practitioners who build or optimize RAG-based question-answering systems.
Use this if you need to systematically compare different RAG components (like retrievers, rerankers, or large language models) and understand their impact on question-answering performance.
Not ideal if you're looking for a simple, out-of-the-box RAG solution for immediate deployment rather than a tool for performance comparison and analysis.
Stars
261
Forks
31
Language
Jupyter Notebook
License
—
Category
Last pushed
Mar 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/naver/bergen"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
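The curl call above can also be wrapped in a small helper. A minimal Python sketch, using only the endpoint path shown in the curl example; the shape of the JSON response is an assumption, since the fields are not documented here:

```python
# Sketch: query the pt-edge quality API for a repo's data.
# The endpoint path comes from the curl example above; the JSON
# response schema is an assumption and may differ from the live API.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Return the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("naver", "bergen"))
    # → https://pt-edge.onrender.com/api/v1/quality/rag/naver/bergen
```

Keeping URL construction separate from the network call makes the helper easy to test offline and to reuse for other repositories in the index.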
Related tools
Renumics/renumics-rag
Visualization for a Retrieval-Augmented Generation (RAG) Assistant 🤖❤️📚
VectorInstitute/retrieval-augmented-generation
Reference Implementations for the RAG bootcamp
KalyanKS-NLP/rag-zero-to-hero-guide
Comprehensive guide to learn RAG from basics to advanced.
alan-turing-institute/t0-1
Application of Retrieval-Augmented Reasoning on a domain-specific body of knowledge
aihpi/workshop-rag
Retrieval Augmented Generation and Semantic-search Tools