frmoretto/clarity-gate
Stop LLMs from presenting your guesses as facts. Clarity Gate is a verification protocol for documents destined for LLMs or RAG systems. It automatically inserts missing uncertainty markers to prevent confident hallucinations, with human-in-the-loop (HITL) review for claims that cannot be directly verified.
This project helps anyone preparing documents for AI systems (such as chatbots or knowledge bases) ensure the information is presented clearly and accurately. It takes raw documents, such as drafts, meeting notes, or reports, and automatically flags or marks statements to distinguish facts from projections, hypotheses, and unproven assertions. The output is a "Clarity-Gated Document" that keeps an AI from confidently stating guesses as facts, making it well suited to founders, researchers, and teams concerned with information quality.
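To make the idea concrete, here is a deliberately naive sketch of what inserting uncertainty markers could look like. This is illustrative only and is not Clarity Gate's actual implementation; the `[UNVERIFIED]` marker text and the keyword list are assumptions for the example.

```python
import re

# Hypothetical heuristic, NOT the clarity-gate implementation:
# flag sentences containing speculative wording so a downstream LLM
# sees them as unverified claims rather than established facts.
SPECULATIVE = re.compile(
    r"\b(should|probably|we expect|estimated?|projected?|might|could)\b",
    re.IGNORECASE,
)

def mark_uncertain(text: str) -> str:
    """Prefix speculative sentences with an [UNVERIFIED] marker."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    marked = []
    for sentence in sentences:
        if SPECULATIVE.search(sentence):
            marked.append("[UNVERIFIED] " + sentence)
        else:
            marked.append(sentence)
    return " ".join(marked)
```

A real pass would need far more than keyword matching (negation, quoted speech, claim types), which is presumably where the HITL review step comes in.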
Available on PyPI.
Use this if your RAG corpus includes drafts, notes, or user-provided content and you need to automatically enforce document quality and prevent AI from misinterpreting claims.
Not ideal if you're solely looking for tools to resolve conflicting information from multiple verified sources after data has already been ingested.
Stars: 23
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Mar 02, 2026
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/frmoretto/clarity-gate"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
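The same endpoint can be called from Python. This is a minimal sketch built from the curl example above; the response schema is not documented here, so the helper simply returns the parsed JSON as-is.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repo (schema not documented here)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a network request):
# print(json.dumps(fetch_quality("frmoretto", "clarity-gate"), indent=2))
```

Unauthenticated calls are limited to 100 requests/day, so cache responses if you poll many repositories.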
Higher-rated alternatives
onestardao/WFGY
WFGY: open-source reasoning and debugging infrastructure for RAG and AI agents. Includes the...
KRLabsOrg/verbatim-rag
Hallucination-prevention RAG system with verbatim span extraction. Ensures all generated content...
iMoonLab/Hyper-RAG
"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation"...
project-miracl/nomiracl
NoMIRACL: A multilingual hallucination evaluation dataset to evaluate LLM robustness in RAG...
chensyCN/LogicRAG
Source code of LogicRAG at AAAI'26.