AstraBert/diRAGnosis

Diagnose the performance of your RAG🩺

Score: 29 / 100 (Experimental)

diRAGnosis helps AI developers and MLOps engineers assess the quality of their Retrieval Augmented Generation (RAG) systems. It takes your documents and an LLM, automatically generates evaluation questions, and then outputs detailed metrics on how well the RAG system retrieves information and generates accurate answers. This allows you to pinpoint weaknesses and improve your RAG's performance.

No commits in the last 6 months.

Use this if you are building or maintaining a RAG application and need a systematic way to measure and improve its question-answering and document retrieval capabilities.

Not ideal if you are looking for a general-purpose LLM evaluation tool that doesn't focus specifically on the retrieval component of RAG.

Tags: RAG evaluation · LLM application development · information retrieval assessment · natural language processing · AI quality assurance
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 42
Forks: 2
Language: Python
License: MIT
Last pushed: Apr 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/AstraBert/diRAGnosis"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.