DocAILab/XRAG

XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced Retrieval-Augmented Generation

Score: 53 / 100 (Established)

This project helps developers and researchers evaluate the individual components of Retrieval-Augmented Generation (RAG) systems. Given different RAG configurations (retrievers, embedding models, large language models), it outputs performance metrics and visualizations. The primary users are AI/ML engineers and researchers building or optimizing RAG applications.

Use this if you need to systematically benchmark and understand how individual RAG components impact overall system performance.

Not ideal if you are a non-technical end-user looking for a ready-to-use RAG application rather than a tool for evaluating RAG components.

Tags: RAG evaluation, LLM benchmarking, NLP research, AI engineering, Information retrieval
No package · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25
(The four subscores sum to the overall score of 53 / 100.)

Stars: 120
Forks: 18
Language: Python
License: Apache-2.0
Last pushed: Mar 07, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/DocAILab/XRAG"

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
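
For programmatic access, here is a minimal Python sketch of the same request, assuming the endpoint returns a JSON body (the response schema is not documented on this page, so the example just pretty-prints whatever comes back rather than assuming any particular fields):

import json
import urllib.request

# Public scorecard endpoint for DocAILab/XRAG, usable without an API key
# at the 100 requests/day tier.
URL = "https://pt-edge.onrender.com/api/v1/quality/rag/DocAILab/XRAG"

with urllib.request.urlopen(URL) as response:
    data = json.load(response)  # assumes a JSON response body

# The field names are not specified here, so print the full payload
# instead of picking out individual keys.
print(json.dumps(data, indent=2))

The page does not say how an API key is supplied with a request, so the sketch sticks to the keyless tier.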