naver/bergen

Benchmarking library for RAG

Score: 52 / 100 · Established

BERGEN helps evaluate how well a Retrieval-Augmented Generation (RAG) system answers questions, especially when comparing different components. You supply a RAG configuration (for example, which retriever or language model to use) and a dataset of questions, and it outputs performance metrics such as answer accuracy and relevance scores. It targets AI researchers and practitioners who build or optimize RAG-based question-answering systems.
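As a rough sketch of that workflow, the snippet below runs two hypothetical BERGEN configurations that differ only in the retriever. The bergen.py entry point and the retriever/generator/dataset override names are illustrative assumptions, not taken from BERGEN's documentation.

import subprocess

# Compare two retrievers while holding the generator and dataset fixed.
# NOTE: "bergen.py" and the override names below are assumptions made for
# illustration; check the BERGEN README for the real invocation.
for retriever in ("bm25", "splade"):
    subprocess.run(
        [
            "python3", "bergen.py",
            f"retriever={retriever}",    # the component under comparison
            "generator=tinyllama-chat",  # held fixed across runs
            "dataset=kilt_nq",           # question-answering benchmark
        ],
        check=True,  # raise if a run fails
    )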

Use this if you need to systematically compare different RAG components (like retrievers, rerankers, or large language models) and understand their impact on question-answering performance.

Not ideal if you're looking for a simple, out-of-the-box RAG solution for immediate deployment rather than a tool for performance comparison and analysis.

Tags: AI-research, natural-language-processing, question-answering-systems, RAG-evaluation, LLM-benchmarking
No Package · No Dependents

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25
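Each of the four categories is worth 25 points, and they sum to the headline score: 10 + 10 + 16 + 16 = 52 out of 100.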

Stars: 261
Forks: 31
Language: Jupyter Notebook
License: (none listed)
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/naver/bergen"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
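For programmatic use, here is a minimal Python sketch of the same request, using only the standard library. It assumes the endpoint returns JSON; the response fields are not documented here, so the payload is printed wholesale.

import json
import urllib.request

# Same request as the curl command above. The JSON structure is assumed;
# inspect the printed output to see what fields the API actually returns.
URL = "https://pt-edge.onrender.com/api/v1/quality/rag/naver/bergen"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))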