RulinShao/RAG-evaluation-harnesses

An evaluation suite for Retrieval-Augmented Generation (RAG).

Score: 35 / 100 (Emerging)

This project helps evaluate how well your Retrieval-Augmented Generation (RAG) system performs on various question-answering tasks. You provide the questions and your RAG model's retrieved documents, and it outputs performance scores. This tool is for researchers, developers, and MLOps engineers who are building and fine-tuning RAG systems and need to benchmark them rigorously.

No commits in the last 6 months.

Use this if you are developing a RAG system and need to systematically test its accuracy and performance across established benchmarks.

Not ideal if you are looking for a tool to deploy or manage your RAG system in a production environment, as this focuses solely on evaluation.

Tags: RAG-evaluation, LLM-benchmarking, NLP-research, AI-model-testing, information-retrieval
Flags: Stale (6 months), No package, No dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 11 / 25
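
The four category scores (each out of 25) sum to the overall rating: 2 + 6 + 16 + 11 = 35 / 100.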


Stars: 23
Forks: 3
Language: Python
License: MIT
Last pushed: Apr 26, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/RulinShao/RAG-evaluation-harnesses"

Open to everyone: 100 requests/day with no key required. Get a free key to raise the limit to 1,000/day.
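
For programmatic access, here is a minimal Python sketch that wraps the same endpoint with the requests library. The "score" field name used at the end is an assumption, since the response schema isn't documented here; print the full payload to confirm what the API returns.

import requests

# Quality API endpoint shown in the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/rag/RulinShao/RAG-evaluation-harnesses"

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality report as JSON; raises on HTTP errors."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx, e.g. if the daily request limit is hit
    return resp.json()

if __name__ == "__main__":
    report = fetch_quality()
    # "score" is an assumed field name; fall back to the whole payload if absent.
    print(report.get("score", report))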