0xshre/rag-evaluation
A question-answering RAG system that uses a custom ChromaDB vector store to retrieve relevant passages, then uses an LLM to generate the answer.
This project helps evaluate and improve question-answering systems built using Retrieval-Augmented Generation (RAG). You feed in documents and questions, and it generates answers while also providing a detailed report on how accurate and relevant the answers are. It's for data scientists and AI engineers who are developing or fine-tuning RAG-based chatbots or knowledge retrieval tools.
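The retrieve-then-generate flow described above can be sketched in a few lines. This is an illustrative toy, not the project's implementation: the real system uses ChromaDB embedding similarity and a live LLM, which are replaced here by word-overlap scoring and a prompt-building stub (both assumptions for the sake of a runnable example).

```python
# Toy retrieve-then-generate sketch. Word overlap stands in for
# ChromaDB's embedding search; the "generator" just returns the
# prompt a real LLM call would receive (assumptions, not the repo's code).

def retrieve(question: str, passages: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(question: str, context: list[str]) -> str:
    """Stub for the LLM call: build the prompt a model would see."""
    return f"Context: {' '.join(context)}\nQuestion: {question}\nAnswer:"

passages = [
    "ChromaDB is an open-source embedding database.",
    "RAG combines retrieval with language-model generation.",
]
context = retrieve("What does RAG combine?", passages)
prompt = generate("What does RAG combine?", context)
```

In the real pipeline, `retrieve` would be a `collection.query(...)` call against ChromaDB and `generate` would send the prompt to an LLM; the structure of the flow is the same.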
No commits in the last 6 months.
Use this if you are building or evaluating a RAG-based question-answering system and need to understand its performance in terms of answer quality and context utilization.
Not ideal if you are looking for a ready-to-use, off-the-shelf chatbot without needing to delve into RAG system performance metrics.
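"Context utilization" can be approximated by a simple token-overlap measure: the fraction of answer tokens that also appear in the retrieved context. This is a hedged illustration of the concept only; the project's actual metrics and their definitions may differ.

```python
# Crude context-utilization score: share of answer tokens grounded in
# the retrieved context. Illustrative definition, not necessarily the
# metric this project computes (assumption).

def context_utilization(answer: str, context: str) -> float:
    answer_tokens = answer.lower().split()
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    hits = sum(1 for t in answer_tokens if t in context_tokens)
    return hits / len(answer_tokens)

score = context_utilization(
    "rag combines retrieval and generation",
    "rag combines retrieval with language model generation",
)
```

Here 4 of the 5 answer tokens ("and" is the exception) appear in the context, giving a score of 0.8; production evaluators typically use embedding-based or LLM-judged variants instead of exact token matching.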
Stars
17
Forks
4
Language
Jupyter Notebook
License
—
Category
Last pushed
Feb 28, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/0xshre/rag-evaluation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
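The endpoint above appears to follow the pattern `/api/v1/quality/rag/{owner}/{repo}`. A small helper can build such URLs for other repositories, with the caveat that the pattern is inferred from this single example and may not generalize.

```python
# Build the quality-API URL shown above. The {owner}/{repo} path
# pattern is inferred from one example and may not hold for other
# repos or categories (assumption).
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    return f"{BASE}/{owner}/{repo}"

url = quality_url("0xshre", "rag-evaluation")
```

The resulting string can be fetched with `curl` as shown above, or with any HTTP client; how an API key is passed (header vs. query parameter) is not documented here, so consult the provider before assuming either.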
Higher-rated alternatives
GrapeCity-AI/gc-qa-rag
A RAG (Retrieval-Augmented Generation) solution based on advanced pre-generated QA pairs.
UKPLab/PeerQA
Code and Data for PeerQA: A Scientific Question Answering Dataset from Peer Reviews, NAACL 2025
Arfazrll/RAG-DocsInsight-Engine
Retrieval-Augmented Generation (RAG) engine for intelligent document analysis, integrating LLM,...
faerber-lab/SQuAI
SQuAI: Scientific Question-Answering with Multi-Agent Retrieval-Augmented Generation (CIKM'25)
Vbj1808/Dokis
Lightweight RAG provenance middleware. Verifies every claim in an LLM response is grounded in a...