EdinburghNLP/awesome-hallucination-detection

List of papers on hallucination detection in LLMs.

Quality score: 56 / 100 (Established)

This resource provides a curated list of research papers focused on detecting and mitigating 'hallucinations' in AI models, especially those that generate text or combine text with images. It details what goes into these detection methods (like specific metrics and datasets) and what comes out (new techniques or improved model reliability). AI researchers, machine learning engineers, and developers working on large language models and multimodal AI will find this useful for staying updated on the latest advancements.

1,060 stars. Actively maintained with 6 commits in the last 30 days.

Use this if you are a researcher or developer actively working on improving the factual accuracy and reliability of large AI models, and need to quickly find relevant academic papers and methods.

Not ideal if you are looking for ready-to-use software tools or libraries for hallucination detection rather than academic papers to read and implement.

Tags: AI research, natural language processing, multimodal AI, AI model evaluation, AI factuality
No Package · No Dependents
Maintenance 13 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 17 / 25


Stars: 1,060
Forks: 81
Language:
License: Apache-2.0
Last pushed: Jan 11, 2026
Commits (30d): 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/EdinburghNLP/awesome-hallucination-detection"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
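The same endpoint can be called from Python with only the standard library. A minimal sketch, using the URL from the curl example above; the structure and field names of the JSON response are not documented here, so treat the parsed dictionary's keys as unknown:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is needed for up to 100 requests/day; a free key
    raises the limit to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo), timeout=10) as resp:
        return json.load(resp)

# The repository on this page:
url = quality_url("nlp", "EdinburghNLP", "awesome-hallucination-detection")
```

Calling `fetch_quality("nlp", "EdinburghNLP", "awesome-hallucination-detection")` performs the same request as the curl command, returning the report as a Python dict.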