Hyper-RAG and RAGGuard

Hyper-RAG prevents hallucinations upstream by improving retrieval quality through hypergraph-based ranking, while RAGGuard detects and scores hallucinations downstream, after generation. The two are complementary and could be chained sequentially in a single pipeline.
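A minimal sketch of what such a sequential pipeline could look like. Neither function below is the real Hyper-RAG or RAGGuard API; they are hypothetical stand-ins for the two stages (hypergraph-driven retrieval replaced here by naive keyword overlap, faithfulness scoring by token grounding).

```python
# Hypothetical two-stage pipeline sketch: upstream retrieval, downstream checking.
# Both functions are illustrative placeholders, not the projects' actual APIs.

def hypergraph_retrieve(query: str, corpus: list[str]) -> list[str]:
    """Stage 1 (Hyper-RAG's role): rank and return supporting passages.
    Placeholder: keyword overlap instead of real hypergraph ranking."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored if score > 0]

def faithfulness_score(answer: str, passages: list[str]) -> float:
    """Stage 2 (RAGGuard's role): fraction of answer tokens grounded in
    the retrieved passages. Placeholder for real faithfulness scoring."""
    support = set(" ".join(passages).lower().split())
    tokens = answer.lower().split()
    if not tokens:
        return 0.0
    return sum(tok in support for tok in tokens) / len(tokens)

corpus = ["aspirin inhibits platelet aggregation",
          "insulin lowers blood glucose"]
passages = hypergraph_retrieve("how does aspirin work", corpus)
answer = "aspirin inhibits platelet aggregation"  # pretend LLM output
score = faithfulness_score(answer, passages)
print(round(score, 2))  # prints 1.0: every answer token is grounded
```

Chaining the stages this way means the detector only ever scores answers against the passages the retriever actually surfaced, which is what makes the two tools complementary rather than redundant.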

| Metric | Hyper-RAG | RAGGuard |
|---|---|---|
| Score | 55 (Established) | 22 (Experimental) |
| Maintenance | 10/25 | 13/25 |
| Adoption | 10/25 | 0/25 |
| Maturity | 16/25 | 9/25 |
| Community | 19/25 | 0/25 |
| Stars | 251 | |
| Forks | 39 | |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | MIT |
| Package | No package, no dependents | No package, no dependents |

About Hyper-RAG

iMoonLab/Hyper-RAG

"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation" by Yifan Feng, Hao Hu, Xingliang Hou, Shiquan Liu, Shihui Ying, Shaoyi Du, Han Hu, and Yue Gao.

This project helps medical professionals, researchers, and educators working with large language models (LLMs) to ensure the accuracy of generated information. It takes medical domain-specific documents as input and uses them to generate more reliable, factually accurate responses from LLMs, reducing instances of fabricated or incorrect information. The primary users are those who rely on LLMs for critical tasks where accuracy is paramount, such as clinical decision support or research.

Tags: medical AI, healthcare analytics, clinical decision support, biomedical research, knowledge management

About RAGGuard

MukundaKatta/RAGGuard

RAG hallucination detection — verify LLM responses are grounded in source documents with faithfulness scoring
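To illustrate the grounding idea the project describes, here is a hedged sketch of sentence-level faithfulness checking. This is not RAGGuard's actual API; the function name, tokenization, and 0.5 threshold are all assumptions chosen for illustration.

```python
# Illustrative sentence-level grounding check -- NOT RAGGuard's real API.
import re

def sentence_grounding(answer: str, sources: list[str]) -> dict[str, float]:
    """For each sentence in the answer, return the best token-overlap
    ratio against any source document (1.0 = fully grounded)."""
    results = {}
    for sent in re.split(r"(?<=[.!?])\s+", answer.strip()):
        if not sent:
            continue
        tokens = set(re.findall(r"\w+", sent.lower()))
        best = 0.0
        for src in sources:
            src_tokens = set(re.findall(r"\w+", src.lower()))
            if tokens:
                best = max(best, len(tokens & src_tokens) / len(tokens))
        results[sent] = best
    return results

sources = ["Metformin is a first-line treatment for type 2 diabetes."]
answer = "Metformin is a first-line treatment. It also cures cancer."
report = sentence_grounding(answer, sources)
# Flag sentences whose grounding ratio falls below an assumed threshold.
flagged = [sent for sent, ratio in report.items() if ratio < 0.5]
print(flagged)  # prints ['It also cures cancer.']
```

Scoring per sentence rather than per answer lets a pipeline surface exactly which claims are unsupported, which is the behavior the project's description (verifying responses are grounded in source documents) suggests.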

Scores updated daily from GitHub, PyPI, and npm data.