2501Pr0ject/RAGnarok-AI

Local-first RAG evaluation framework for LLM applications. 100% local, no API keys required.

Score: 46 / 100 (Emerging)

This project helps AI developers and engineers assess how well their Retrieval-Augmented Generation (RAG) systems perform. You provide your RAG pipeline and a knowledge base; it generates test questions, evaluates key metrics such as relevance and faithfulness, and produces a clear summary of your system's quality. It is aimed at anyone building or maintaining LLM applications who needs to ensure their RAG components are accurate and reliable.
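
As a rough illustration of the workflow described above, the sketch below shows what a question-generation-and-scoring loop can look like. Every name in it (evaluate_rag, score_relevance, score_faithfulness, the pipeline callable) is a hypothetical placeholder for your own code and metric functions, not RAGnarok-AI's actual API.

# Hypothetical sketch only: these names are placeholders, not RAGnarok-AI's API.
def evaluate_rag(pipeline, questions, score_relevance, score_faithfulness):
    """Run each generated test question through the pipeline and average the metric scores."""
    rows = []
    for question in questions:
        answer, contexts = pipeline(question)  # your RAG pipeline: answer text + retrieved chunks
        rows.append({
            "relevance": score_relevance(question, answer),        # does the answer address the question?
            "faithfulness": score_faithfulness(answer, contexts),  # is the answer grounded in the contexts?
        })
    return {
        metric: sum(row[metric] for row in rows) / len(rows)
        for metric in ("relevance", "faithfulness")
    }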

Available on PyPI.

Use this if you are developing or deploying RAG-based LLM applications and need a fast, local, and reliable way to evaluate their performance without relying on external APIs.

Not ideal if you are not building RAG systems or if you primarily work with non-LLM machine learning models.

LLM development · RAG systems · AI evaluation · NLP engineering · MLOps
Maintenance 10 / 25
Adoption 5 / 25
Maturity 20 / 25
Community 11 / 25
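
The overall score appears to be the sum of the four components: 10 + 5 + 20 + 11 = 46 out of a possible 100.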

Stars: 13
Forks: 2
Language: Python
License: AGPL-3.0
Last pushed: Feb 28, 2026
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/2501Pr0ject/RAGnarok-AI"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
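
If you prefer Python to curl, the same endpoint can be fetched with the requests library. This is a minimal sketch; the response schema is not documented on this page, so the code simply prints whatever JSON comes back.

import requests

# Same quality endpoint as the curl example above (no key needed up to 100 requests/day).
url = "https://pt-edge.onrender.com/api/v1/quality/rag/2501Pr0ject/RAGnarok-AI"
response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors or rate limiting
data = response.json()
print(data)                  # inspect the payload; the exact fields are not shown here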