HillZhang1999/llm-hallucination-survey

A reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"

Score: 35/100 (Emerging)

This reading list helps AI researchers and practitioners understand and address hallucination in large language models. It compiles academic papers on evaluating, explaining, and mitigating cases where an LLM generates plausible but incorrect or misleading information. The resource is aimed at anyone working on or researching the reliability of AI-generated text.

1,078 stars. No commits in the last 6 months.

Use this if you are a researcher, AI engineer, or data scientist focusing on improving the factual accuracy and trustworthiness of large language models.

Not ideal if you are looking for an interactive tool or software to directly detect or fix hallucinations in your own LLM outputs.

Tags: AI research, LLM evaluation, natural language generation, AI reliability, text generation quality
Badges: No License · Stale (6 months) · No Package · No Dependents
Score breakdown:
Maintenance: 2/25
Adoption: 10/25
Maturity: 8/25
Community: 15/25

The overall 35/100 appears to be the sum of these four category scores (2 + 10 + 8 + 15 = 35).

Stars: 1,078
Forks: 54
Language: not listed
License: none
Last pushed: Sep 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/HillZhang1999/llm-hallucination-survey"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
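If you'd rather call the endpoint programmatically, below is a minimal Python sketch using only the standard library. It assumes the endpoint returns JSON; since the response schema is not documented on this page, the script pretty-prints the raw payload instead of guessing field names, and it uses only anonymous access (the page does not say how an API key would be passed).

import json
import urllib.request

# Endpoint copied from the curl example above. Anonymous access is
# limited to 100 requests/day per the note above; how a key is sent
# is not documented here, so this sketch stays unauthenticated.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/HillZhang1999/llm-hallucination-survey")

def fetch_quality(url: str) -> dict:
    """Fetch the quality record and parse it as JSON (assumed format)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        if resp.status != 200:
            raise RuntimeError(f"Unexpected HTTP status: {resp.status}")
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality(URL)
    # Print the raw payload rather than guessing field names,
    # since the response schema is not documented on this page.
    print(json.dumps(data, indent=2))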