Hyper-RAG and RAGGuard
Hyper-RAG prevents hallucinations upstream by improving retrieval quality through hypergraph-based ranking, while RAGGuard detects and scores hallucinations downstream, after generation. The two are complementary and could be used sequentially in a pipeline: Hyper-RAG supplies better-grounded context before generation, and RAGGuard checks the generated output against that context afterward.
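The sequential arrangement can be sketched with toy stand-ins. Every function below is a hypothetical placeholder, not the real Hyper-RAG or RAGGuard API: retrieval is faked with word overlap instead of hypergraph ranking, and the faithfulness check is a simple word-support fraction.

```python
# Hedged sketch of a retrieve -> generate -> verify pipeline.
# All names are illustrative placeholders, not the actual
# Hyper-RAG or RAGGuard interfaces.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy stand-in for hypergraph-driven retrieval: rank passages
    by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(query: str, passages: list[str]) -> str:
    """Toy stand-in for the LLM: echo the best-matching passage."""
    return passages[0] if passages else "No supporting context found."

def faithfulness(answer: str, passages: list[str]) -> float:
    """Toy stand-in for downstream hallucination scoring: fraction
    of answer words that appear in the retrieved passages."""
    support = set(" ".join(passages).lower().split())
    words = answer.lower().split()
    return sum(w in support for w in words) / max(len(words), 1)

corpus = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Aspirin inhibits platelet aggregation.",
]
query = "first-line treatment for type 2 diabetes"
passages = retrieve(query, corpus)
answer = generate(query, passages)
score = faithfulness(answer, passages)
print(f"answer: {answer}")
print(f"faithfulness: {score:.2f}")  # 1.00 here, since the answer is copied from context
```

A real pipeline would replace `retrieve` with Hyper-RAG's ranking and `faithfulness` with RAGGuard's scoring, keeping the same upstream-then-downstream shape.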
About Hyper-RAG
iMoonLab/Hyper-RAG
"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation" by Yifan Feng, Hao Hu, Xingliang Hou, Shiquan Liu, Shihui Ying, Shaoyi Du, Han Hu, and Yue Gao.
This project helps medical professionals, researchers, and educators working with large language models (LLMs) to ensure the accuracy of generated information. It takes medical domain-specific documents as input and uses them to generate more reliable, factually accurate responses from LLMs, reducing instances of fabricated or incorrect information. The primary users are those who rely on LLMs for critical tasks where accuracy is paramount, such as clinical decision support or research.
About RAGGuard
MukundaKatta/RAGGuard
RAG hallucination detection: verify that LLM responses are grounded in source documents, with faithfulness scoring.
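Faithfulness scoring of the kind described above can be illustrated with a minimal sentence-level grounding check. This is a generic sketch, not RAGGuard's actual algorithm: the overlap metric and the 0.8 threshold are assumptions chosen for the example.

```python
# Minimal sketch of sentence-level faithfulness scoring.
# Not RAGGuard's real implementation; a generic precision-style
# word-overlap metric stands in for it.
import re

def sentence_faithfulness(response: str,
                          sources: list[str]) -> list[tuple[str, float]]:
    """Score each response sentence by the fraction of its words
    that appear anywhere in the source documents."""
    source_vocab = set(re.findall(r"[a-z0-9']+", " ".join(sources).lower()))
    scores = []
    for sent in re.split(r"(?<=[.!?])\s+", response.strip()):
        words = re.findall(r"[a-z0-9']+", sent.lower())
        if not words:
            continue
        score = sum(w in source_vocab for w in words) / len(words)
        scores.append((sent, score))
    return scores

sources = ["The study enrolled 120 patients over six months."]
response = "The study enrolled 120 patients. It showed a 40% cure rate."
for sent, score in sentence_faithfulness(response, sources):
    flag = "grounded" if score >= 0.8 else "suspect"  # threshold is an assumption
    print(f"{score:.2f} {flag}: {sent}")
```

The first sentence scores 1.00 (every word is supported), while the fabricated cure-rate claim scores 0.00 and would be flagged.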