anlp-team/LTI_Neural_Navigator
"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang
This system helps organizations improve the accuracy of answers generated by Large Language Models (LLMs) when querying specific, often private, knowledge bases. Given your proprietary documents and a set of questions, it produces reliable, domain-specific answers. It is aimed at researchers, business analysts, and compliance officers who need trustworthy information from internal data.
No commits in the last 6 months.
Use this if you need an LLM to give factually accurate answers grounded in your private, domain-specific documents and want to reduce hallucinations.
Not ideal if you want a general-purpose LLM for broad, public-knowledge questions, or if you have no private knowledge base to query.
Stars
45
Forks
4
Language
HTML
License
MIT
Last pushed
Mar 18, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/anlp-team/LTI_Neural_Navigator"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
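The same endpoint can also be called from code. A minimal Python sketch follows; only the endpoint URL comes from the curl command above, and the shape of the JSON response is an assumption, so inspect the payload before relying on any field names:

```python
# Sketch of querying the pt-edge quality API for a GitHub repository.
# Assumption: the endpoint returns a JSON object (schema not documented here).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository API URL from an owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record; raises HTTPError on failure."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the URL used in the curl example above.
    print(quality_url("anlp-team", "LTI_Neural_Navigator"))
```

Without an API key this stays within the anonymous 100 requests/day limit, so avoid calling it in a tight loop.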
Higher-rated alternatives
onestardao/WFGY
WFGY: open-source reasoning and debugging infrastructure for RAG and AI agents. Includes the...
KRLabsOrg/verbatim-rag
Hallucination-prevention RAG system with verbatim span extraction. Ensures all generated content...
iMoonLab/Hyper-RAG
"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation"...
frmoretto/clarity-gate
Stop LLMs from hallucinating your guesses as facts. Clarity Gate is a verification protocol for...
project-miracl/nomiracl
NoMIRACL: A multilingual hallucination evaluation dataset to evaluate LLM robustness in RAG...