Vbj1808/Dokis
Lightweight RAG provenance middleware. Verifies that every claim in an LLM response is grounded in a retrieved source, without an LLM call.
When building applications that use Large Language Models (LLMs) to answer questions from retrieved documents, this tool helps ensure the LLM's responses are truthful and fully supported by the provided sources. Given your retrieved document chunks and the LLM's generated answer, it reports exactly which parts of the answer are directly supported by your documents and which are not. It is aimed at developers building RAG (Retrieval-Augmented Generation) applications who need to verify the factual basis of LLM outputs in real time.
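To illustrate the kind of check such middleware performs, here is a minimal sketch in Python that splits an answer into sentences and flags each one as grounded or ungrounded by lexical overlap with the retrieved chunks. This is not Dokis's actual API; the function name, threshold, and overlap heuristic are assumptions for illustration only, but it shows how provenance can be verified without any LLM call.

```python
import re

def grounded_spans(answer: str, chunks: list[str], threshold: float = 0.6):
    """Label each answer sentence as grounded/ungrounded by token overlap.

    A sentence counts as grounded when at least `threshold` of its tokens
    appear in a single retrieved chunk. This is a toy stand-in for a real
    provenance check; no LLM call is involved.
    """
    chunk_tokens = [set(re.findall(r"\w+", c.lower())) for c in chunks]
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        tokens = set(re.findall(r"\w+", sentence.lower()))
        if not tokens:
            continue
        best = max((len(tokens & ct) / len(tokens) for ct in chunk_tokens),
                   default=0.0)
        results.append((sentence, best >= threshold))
    return results

chunks = ["The Eiffel Tower is 330 metres tall and stands in Paris."]
answer = "The Eiffel Tower is 330 metres tall. It was painted gold in 2020."
for sentence, ok in grounded_spans(answer, chunks):
    print(("GROUNDED  " if ok else "UNGROUNDED") + " | " + sentence)
```

In this toy example the first sentence is fully covered by the retrieved chunk, while the second (the invented 2020 claim) has almost no overlap and is flagged as ungrounded.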
Available on PyPI.
Use this if you are developing an LLM application and need to prevent the LLM from generating responses with claims that aren't directly supported by your source documents, or if you need to enforce that only content from specific, trusted domains can be used.
Not ideal if you are looking for an offline evaluation tool for your RAG pipeline, or if you primarily need general content safety and policy enforcement like toxicity filtering.
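The trusted-domain enforcement mentioned above can be sketched as a simple allowlist filter applied to retrieved chunks before they reach the LLM. The chunk shape (`{"text": ..., "source": ...}`), the allowlist contents, and the function name are all hypothetical, not Dokis's real interface:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; subdomains of a trusted domain are accepted too.
TRUSTED_DOMAINS = {"docs.example.com", "wiki.example.com"}

def filter_trusted(chunks: list[dict]) -> list[dict]:
    """Keep only retrieved chunks whose source URL is on the allowlist."""
    kept = []
    for chunk in chunks:
        host = (urlparse(chunk["source"]).hostname or "").lower()
        if any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            kept.append(chunk)
    return kept

chunks = [
    {"text": "Official setup guide.", "source": "https://docs.example.com/setup"},
    {"text": "Random blog post.", "source": "https://blog.attacker.net/post"},
]
print([c["source"] for c in filter_trusted(chunks)])
# → ['https://docs.example.com/setup']
```

Filtering by hostname rather than substring matching (e.g. `"example.com" in url`) avoids spoofed URLs like `https://docs.example.com.attacker.net/`.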
Stars: 18
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 27, 2026
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Vbj1808/Dokis"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
GrapeCity-AI/gc-qa-rag
A RAG (Retrieval-Augmented Generation) solution based on advanced pre-generated QA pairs.
UKPLab/PeerQA
Code and Data for PeerQA: A Scientific Question Answering Dataset from Peer Reviews, NAACL 2025
Arfazrll/RAG-DocsInsight-Engine
Retrieval-Augmented Generation (RAG) engine for intelligent document analysis, integrating LLM,...
faerber-lab/SQuAI
SQuAI: Scientific Question-Answering with Multi-Agent Retrieval-Augmented Generation (CIKM'25)
robert-mcdermott/rag_webquery
A command line utility that queries websites for answers using a local LLM