Vbj1808/Dokis

Lightweight RAG provenance middleware. Verifies that every claim in an LLM response is grounded in a retrieved source, without making an additional LLM call.

Score: 37 / 100 (Emerging)

When building applications that use Large Language Models (LLMs) to answer questions from retrieved documents, this tool helps ensure that the LLM's responses are truthful and fully supported by the provided sources. It takes your retrieved document chunks and the LLM's generated answer, then reports exactly which parts of the answer are directly supported by your documents and which are not. It is aimed at developers building Retrieval-Augmented Generation (RAG) applications who need to verify the factual basis of LLM outputs in real time.

Available on PyPI.

Use this if you are developing an LLM application and need to prevent the LLM from generating responses with claims that aren't directly supported by your source documents, or if you need to enforce that only content from specific, trusted domains can be used.

Not ideal if you are looking for an offline evaluation tool for your RAG pipeline, or if you primarily need general content safety and policy enforcement like toxicity filtering.
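To make the core idea concrete, here is a minimal sketch of claim-level grounding without an LLM call. This is a generic lexical-overlap illustration, not Dokis's actual API or algorithm: it splits the answer into sentences and flags any sentence whose content words are not sufficiently covered by the retrieved chunks (the function name, threshold, and matching strategy are all assumptions for illustration).

```python
import re

def grounded_spans(answer: str, chunks: list[str], threshold: float = 0.6):
    """Naive lexical grounding check (illustrative only): flag answer
    sentences whose words are not sufficiently covered by any source chunk."""
    # Pool all words that appear anywhere in the retrieved sources.
    source_words: set[str] = set()
    for chunk in chunks:
        source_words.update(re.findall(r"\w+", chunk.lower()))

    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = re.findall(r"\w+", sentence.lower())
        if not words:
            continue
        # Fraction of this sentence's words found in the sources.
        coverage = sum(w in source_words for w in words) / len(words)
        results.append((sentence, coverage >= threshold))
    return results

chunks = ["Dokis is distributed on PyPI under the MIT license."]
for sentence, ok in grounded_spans("Dokis is on PyPI. It was written in 1987.", chunks):
    print("GROUNDED" if ok else "UNGROUNDED", "-", sentence)
```

A production provenance checker would use more robust matching (span alignment, embeddings, or exact citation tracking), but the input/output shape shown here, chunks plus answer in, per-claim verdicts out, matches the workflow described above.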

Tags: LLM application development, RAG pipeline, Generative AI, Source verification, Content provenance
Maintenance: 13 / 25
Adoption: 6 / 25
Maturity: 18 / 25
Community: 0 / 25


Stars: 18
Forks:
Language: Python
License: MIT
Last pushed: Mar 27, 2026
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Vbj1808/Dokis"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
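The same endpoint can be called from Python using only the standard library. This sketch assumes the response is JSON (as the curl example suggests); the `category` path segment ("rag" in the documented URL) and the function names are assumptions for illustration.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Mirrors the documented curl endpoint; "rag" is the path segment
    # seen in the example URL (its exact meaning is an assumption).
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0) -> dict:
    # Plain unauthenticated GET, matching the keyless curl example;
    # assumes a JSON response body.
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("rag", "Vbj1808", "Dokis"))
```

Note that the authentication mechanism for keyed access (1,000 requests/day) is not documented here, so the sketch omits it rather than guess at a header name.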