2501Pr0ject/RAGnarok-AI
Local-first RAG evaluation framework for LLM applications. 100% local, no API keys required.
This project helps AI developers and engineers assess how well their Retrieval-Augmented Generation (RAG) systems perform. You provide your RAG pipeline and a knowledge base; it generates test questions, evaluates key metrics such as relevance and faithfulness, and produces a clear summary of your RAG system's quality. It is aimed at anyone building or maintaining LLM applications who needs to verify that their RAG components are accurate and reliable.
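To make the two headline metrics concrete, here is a minimal, self-contained sketch of what overlap-based relevance and faithfulness scores could look like. These naive token-overlap functions are illustrative assumptions only, not RAGnarok-AI's actual metric implementations; consult the project's own documentation for real usage.

# Illustrative stand-in for the kind of metrics a RAG evaluator reports.
# These token-overlap scores are assumptions for demonstration only;
# they are NOT RAGnarok-AI's actual implementations.

def _tokens(text: str) -> set[str]:
    return set(text.lower().split())

def faithfulness(answer: str, contexts: list[str]) -> float:
    """Fraction of answer tokens grounded in the retrieved contexts."""
    answer_toks = _tokens(answer)
    context_toks = set().union(*(_tokens(c) for c in contexts))
    return len(answer_toks & context_toks) / len(answer_toks) if answer_toks else 0.0

def relevance(question: str, contexts: list[str]) -> float:
    """Fraction of question tokens covered by the retrieved contexts."""
    question_toks = _tokens(question)
    context_toks = set().union(*(_tokens(c) for c in contexts))
    return len(question_toks & context_toks) / len(question_toks) if question_toks else 0.0

contexts = ["RAG systems retrieve passages and generate grounded answers."]
print(faithfulness("RAG systems generate grounded answers.", contexts))  # score in [0, 1]
print(relevance("What do RAG systems retrieve?", contexts))              # score in [0, 1]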
Available on PyPI.
Use this if you are developing or deploying RAG-based LLM applications and need a fast, local, and reliable way to evaluate their performance without relying on external APIs.
Not a good fit if you aren't building RAG systems or work primarily with non-LLM machine learning models.
Stars: 13
Forks: 2
Language: Python
License: AGPL-3.0
Category:
Last pushed: Feb 28, 2026
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/2501Pr0ject/RAGnarok-AI"
Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000 requests/day.
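For programmatic use, a minimal Python sketch of the same request follows. The endpoint is taken from the curl example above; the response schema and the auth header name are assumptions, so inspect the payload before relying on specific fields.

# Fetch this repo's quality data from the endpoint shown above.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/2501Pr0ject/RAGnarok-AI"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data)  # exact response schema isn't documented here; inspect it first
# With a free key (1,000 requests/day) -- the header name is an assumption:
# resp = requests.get(url, headers={"Authorization": "Bearer <YOUR_KEY>"}, timeout=10)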
Higher-rated alternatives
vectara/open-rag-eval - RAG evaluation without the need for "golden answers"
DocAILab/XRAG - XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore - ⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator - A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed - Automated benchmarking of Retrieval-Augmented Generation (RAG) systems