DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced Retrieval-Augmented Generation
This project helps developers and researchers evaluate the individual components of Retrieval-Augmented Generation (RAG) systems. Given a RAG configuration (a choice of retriever, embedding model, and large language model, among other modules), it outputs performance metrics and visualizations. The primary users are AI/ML engineers and researchers building or optimizing RAG applications.
Use this if you need to systematically benchmark and understand how individual RAG components impact overall system performance.
Not ideal if you are a non-technical end-user looking for a ready-to-use RAG application rather than a tool for evaluating RAG components.
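To make "benchmarking foundational component modules" concrete, here is an illustrative Python sketch of the kind of controlled comparison such a tool automates. This is not XRAG's actual API: the toy corpus, the two retrievers, and the hit-rate metric are all invented for illustration.

```python
# Illustrative only: swap one RAG component (the retriever) while holding
# everything else fixed, and measure the effect on retrieval quality.
CORPUS = [
    "XRAG benchmarks retrievers, embeddings, and LLMs in RAG pipelines.",
    "BM25 is a classic sparse lexical retriever.",
    "Dense retrievers embed queries and passages into the same vector space.",
]

def keyword_retrieve(question: str, k: int = 2) -> list[str]:
    """Toy sparse retriever: rank passages by shared lowercase tokens."""
    q_tokens = set(question.lower().split())
    scored = sorted(CORPUS, key=lambda p: -len(q_tokens & set(p.lower().split())))
    return scored[:k]

def length_retrieve(question: str, k: int = 2) -> list[str]:
    """Deliberately weak baseline: rank passages by length alone."""
    return sorted(CORPUS, key=len, reverse=True)[:k]

# (question, gold passage) pairs; a hit means the gold passage is retrieved.
QA = [
    ("What is BM25", "BM25 is a classic sparse lexical retriever."),
    ("What does a dense retriever embed",
     "Dense retrievers embed queries and passages into the same vector space."),
]

def hit_rate(retriever, qa_pairs) -> float:
    """Fraction of questions whose gold passage appears in the top-k results."""
    return sum(gold in retriever(q) for q, gold in qa_pairs) / len(qa_pairs)

for name, retriever in [("keyword", keyword_retrieve), ("length", length_retrieve)]:
    print(f"{name}: hit_rate={hit_rate(retriever, QA):.2f}")
```

XRAG automates this kind of single-component swap across retrievers, embeddings, and LLMs, with richer metrics and visualizations than this sketch.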
Stars: 120
Forks: 18
Language: Python
License: Apache-2.0
Category: RAG
Last pushed: Mar 07, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/DocAILab/XRAG"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
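For programmatic access, here is a minimal Python equivalent of the curl command above. The response schema is not documented here, so the script simply prints the raw JSON for inspection; `requests` is the only dependency.

```python
import requests

# Quality data for DocAILab/XRAG; up to 100 requests/day without a key.
URL = "https://pt-edge.onrender.com/api/v1/quality/rag/DocAILab/XRAG"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors (rate limits, 404s) early

# The payload structure isn't documented above, so print it to inspect it.
print(resp.json())
```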
Related tools
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems
2501Pr0ject/RAGnarok-AI
Local-first RAG evaluation framework for LLM applications. 100% local, no API keys required.