iMoonLab/Hyper-RAG
"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation" by Yifan Feng, Hao Hu, Xingliang Hou, Shiquan Liu, Shihui Ying, Shaoyi Du, Han Hu, and Yue Gao.
This project helps medical professionals, researchers, and educators working with large language models (LLMs) ensure the accuracy of generated information. It takes medical domain-specific documents as input and uses them to ground LLM responses, producing more reliable, factually accurate output and reducing fabricated or incorrect information. The primary users are those who rely on LLMs for critical tasks where accuracy is paramount, such as clinical decision support or research.
Use this if you need to integrate LLMs into high-stakes environments, particularly in fields like medicine, where factual accuracy and avoiding 'hallucinations' are critical.
Not ideal if your primary concern is generating creative content or if factual accuracy is a secondary consideration.
Stars: 251
Forks: 39
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/iMoonLab/Hyper-RAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
onestardao/WFGY
WFGY: open-source reasoning and debugging infrastructure for RAG and AI agents. Includes the...
KRLabsOrg/verbatim-rag
Hallucination-prevention RAG system with verbatim span extraction. Ensures all generated content...
frmoretto/clarity-gate
Stop LLMs from hallucinating your guesses as facts. Clarity Gate is a verification protocol for...
project-miracl/nomiracl
NoMIRACL: A multilingual hallucination evaluation dataset to evaluate LLM robustness in RAG...
chensyCN/LogicRAG
Source code of LogicRAG at AAAI'26.