XRAG and rageval

                XRAG                        rageval
Score           53 (Established)            36 (Emerging)
Maintenance     10/25                       0/25
Adoption        10/25                       10/25
Maturity        16/25                       16/25
Community       17/25                       10/25
Stars           120                         170
Forks           18                          10
Downloads       -                           -
Commits (30d)   0                           0
Language        Python                      Python
License         Apache-2.0                  Apache-2.0
Flags           No Package, No Dependents   Stale 6m, No Package, No Dependents

About XRAG

DocAILab/XRAG

XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced Retrieval-Augmented Generation

XRAG helps developers and researchers evaluate the individual components of Retrieval-Augmented Generation (RAG) systems. It takes a set of RAG configurations, varying the retriever, the embedding model, and the Large Language Model, and outputs performance metrics and visualizations for each combination. Its primary users are AI/ML engineers and researchers building or optimizing RAG applications.
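A minimal sketch of the component-sweep workflow this describes, in plain Python: enumerate component combinations, run each through a pipeline, and rank them by a metric. Every name here (RETRIEVERS, run_rag, score) is a hypothetical placeholder for illustration, not XRAG's actual API.

```python
# Hypothetical sketch of "vary RAG components, collect metrics per config".
# None of these names come from XRAG; they only illustrate the workflow.
from itertools import product

RETRIEVERS = ["bm25", "dense"]
EMBEDDINGS = ["minilm", "bge-small"]
LLMS = ["llama-3-8b", "qwen-7b"]

def run_rag(retriever: str, embedding: str, llm: str, question: str) -> str:
    # Placeholder pipeline: a real one would retrieve documents, then generate.
    return f"answer about retrieval-augmented generation from {llm}"

def score(answer: str, reference: str) -> float:
    # Token-overlap F1 between the answer and a reference answer.
    a = set(answer.lower().split())
    r = set(reference.lower().split())
    overlap = len(a & r)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(a), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

results = []
for retriever, embedding, llm in product(RETRIEVERS, EMBEDDINGS, LLMS):
    answer = run_rag(retriever, embedding, llm, "What is RAG?")
    results.append((retriever, embedding, llm,
                    score(answer, "retrieval-augmented generation")))

# Report configurations best-first, like a small leaderboard.
for retriever, embedding, llm, f1 in sorted(results, key=lambda row: -row[3]):
    print(f"{retriever:>6} {embedding:>10} {llm:>12}  f1={f1:.2f}")
```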

Tags: RAG evaluation, LLM benchmarking, NLP research, AI engineering, Information retrieval

About rageval

gomate-community/rageval

Evaluation tools for Retrieval-augmented Generation (RAG) methods.

rageval evaluates the performance of Retrieval-Augmented Generation (RAG) systems. It takes the outputs from each stage of a RAG pipeline (rewritten queries, retrieved documents, and generated answers) and scores how well the system performs on aspects such as answer correctness, factual consistency, and document relevance. It is designed for AI/ML engineers and researchers building and refining RAG-based applications.
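To make the input/output shape concrete, here is a minimal sketch of per-record scoring over pipeline outputs. The record schema and both metric functions are assumptions made for illustration; they are not rageval's actual interface.

```python
# Hypothetical sketch of scoring one RAG pipeline record. The record
# fields and metrics below are illustrative assumptions, not rageval's API.
import string

def _tokens(text: str) -> list[str]:
    return [w.strip(string.punctuation) for w in text.lower().split()]

def answer_correctness(answer: str, reference: str) -> float:
    # Crude correctness proxy: does the reference string appear in the answer?
    return 1.0 if reference.lower() in answer.lower() else 0.0

def context_support(answer: str, docs: list[str]) -> float:
    # Crude factual-consistency proxy: fraction of answer tokens that also
    # appear somewhere in the retrieved documents.
    doc_tokens = set(_tokens(" ".join(docs)))
    answer_tokens = _tokens(answer)
    return sum(t in doc_tokens for t in answer_tokens) / max(len(answer_tokens), 1)

record = {
    "query": "Who wrote Dune?",
    "retrieved_docs": ["Dune is a 1965 novel by Frank Herbert."],
    "answer": "Frank Herbert wrote Dune.",
    "reference": "Frank Herbert",
}

print({
    "answer_correctness": answer_correctness(record["answer"], record["reference"]),
    "context_support": context_support(record["answer"], record["retrieved_docs"]),
})
```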

Tags: AI-evaluation, NLP-benchmarking, Generative-AI-testing, LLM-performance, Information-retrieval-quality

Scores updated daily from GitHub, PyPI, and npm data.