AstraBert/diRAGnosis
Diagnose the performance of your RAG🩺
diRAGnosis helps AI developers and MLOps engineers assess the quality of their Retrieval Augmented Generation (RAG) systems. It takes your documents and an LLM, automatically generates evaluation questions, and then outputs detailed metrics on how well the RAG system retrieves information and generates accurate answers. This allows you to pinpoint weaknesses and improve your RAG's performance.
No commits in the last 6 months.
Use this if you are building or maintaining a RAG application and need a systematic way to measure and improve its question-answering and document retrieval capabilities.
Not ideal if you need a general-purpose LLM evaluation tool: diRAGnosis focuses specifically on the retrieval and generation components of RAG.
Stars: 42
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Apr 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/AstraBert/diRAGnosis"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
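The same endpoint can be called from code. A minimal Python sketch using only the standard library, assuming the API returns a flat JSON object; the field names (`repo`, `stars`, `forks`, `last_pushed`) are assumptions based on the stats shown on this page, not a documented schema:

```python
import json
import urllib.request

# Endpoint shown above; no API key needed for up to 100 requests/day.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/rag/AstraBert/diRAGnosis"


def summarize(payload: dict) -> str:
    """Build a one-line summary from a repo-quality payload.

    The keys used here are guesses from the stats on this page;
    adjust them to match the actual API response.
    """
    return (
        f"{payload.get('repo', 'unknown')}: "
        f"{payload.get('stars', '?')} stars, "
        f"{payload.get('forks', '?')} forks, "
        f"last pushed {payload.get('last_pushed', '?')}"
    )


def fetch_quality(url: str = API_URL) -> dict:
    """Fetch and decode the quality report as a dict."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode())
```

Calling `summarize(fetch_quality())` would print a one-line digest of the stats above, assuming the response shape matches.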
Higher-rated alternatives
Bessouat40/RAGLight
RAGLight is a modular framework for Retrieval-Augmented Generation (RAG). It makes it easy to...
datallmhub/ragctl
A powerful CLI tool to manage, test, and optimize RAG pipelines. Streamline your...
superagent-ai/super-rag
Super performant RAG pipelines for AI apps. Summarization, Retrieve/Rerank and Code Interpreters...
feld-m/rag_blueprint
A modular framework for building and deploying Retrieval-Augmented Generation (RAG) systems with...
McKern3l/RAGdrag
RAG pipeline security testing toolkit - 27 techniques across 6 kill chain phases, mapped to MITRE ATLAS