HillZhang1999/llm-hallucination-survey
A reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
This reading list helps AI researchers and practitioners understand and address 'hallucination' in large language models. It compiles academic papers that evaluate, explain, and mitigate instances where an LLM generates plausible but incorrect or misleading information. The resource is designed for anyone working with or researching the reliability of AI-generated text.
1,078 stars. No commits in the last 6 months.
Use this if you are a researcher, AI engineer, or data scientist focusing on improving the factual accuracy and trustworthiness of large language models.
Not ideal if you are looking for an interactive tool or software to directly detect or fix hallucinations in your own LLM outputs.
Stars: 1,078
Forks: 54
Language: —
License: —
Category: —
Last pushed: Sep 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/HillZhang1999/llm-hallucination-survey"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
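If you prefer to call the endpoint from code rather than curl, a minimal Python sketch follows. The URL is taken from the curl command above; the assumption that the response body is JSON (and its exact shape) is not confirmed by this page, so treat `fetch_quality` as illustrative.

```python
# Minimal sketch for querying the quality API shown above.
# Assumption: the endpoint returns a JSON body (format not documented here).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the URL for this repository's data without hitting the network.
    print(quality_url("HillZhang1999", "llm-hallucination-survey"))
```

No API key is attached here; per the note above, anonymous access allows 100 requests/day.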
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
An attack that induces hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
Up-to-date curated list of state-of-the-art large vision-language model hallucination...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...