chu2bard/ragcraft
End-to-end RAG pipeline with built-in evaluation metrics
ragcraft lets you ask questions against your own document collection: it ingests text, splits it into chunks, indexes them, retrieves the pieces most relevant to a query, and generates a grounded answer, with built-in metrics to evaluate answer quality. Anyone who needs to extract precise information from large bodies of text, such as researchers, legal professionals, or data analysts, would find it useful.
Use it if you need a system that answers questions accurately from your specific documents and measures how well those answers are generated.
Not ideal if you're looking for a simple search engine or a tool to generate creative text without grounding in specific source documents.
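The chunk → index → retrieve → generate flow described above can be sketched in a few lines. This is not ragcraft's actual API (the repository's interfaces are not shown here); it is a toy illustration using bag-of-words cosine similarity in place of a real embedding model, with all names chosen for this example.

```python
# Toy sketch of a retrieval step in a RAG pipeline.
# NOT ragcraft's API: function names and scoring are illustrative only.
from collections import Counter
import math

def chunk(text, size=6):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def vectorize(text):
    """Bag-of-words term counts (a stand-in for an embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Rank chunks by similarity to the query and return the top k."""
    qv = vectorize(query)
    ranked = sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)
    return ranked[:k]

doc = ("The MIT license permits reuse. Retrieval augmented generation "
       "grounds answers in retrieved context.")
chunks = chunk(doc)
top = retrieve("what does the MIT license permit", chunks, k=1)
print(top[0])  # the chunk mentioning the MIT license
```

A real pipeline would swap the word-count vectors for dense embeddings and pass the retrieved chunks to an LLM as context; the evaluation metrics the listing mentions would then score the generated answer against that retrieved context.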
Stars
11
Forks
—
Language
Python
License
MIT
Category
Last pushed
Feb 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/chu2bard/ragcraft"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems