NishilBalar/Awesome-LVLM-Hallucination
Up-to-date curated list of state-of-the-art research on hallucinations in Large Vision-Language Models (LVLMs): papers, code & resources
When working with Large Vision Language Models (LVLMs), also known as Multimodal Large Language Models (MLLMs), you might encounter 'hallucinations' where the model generates text describing things not present in the visual input. This resource provides an organized collection of state-of-the-art research papers, code, and descriptions related to detecting and mitigating these LVLM hallucinations. It's for researchers, developers, or practitioners who are building, evaluating, or deploying LVLMs and need to address their reliability.
Use this if you are actively working with Large Vision Language Models and need to understand, evaluate, or reduce instances where these models generate inaccurate or fabricated information from images.
Not ideal if you are looking for a pre-packaged tool or library that directly solves hallucination problems without requiring a deep dive into research papers and methodologies.
Stars: 283
Forks: 15
Language: —
License: —
Category: —
Last pushed: Feb 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/NishilBalar/Awesome-LVLM-Hallucination"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
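For programmatic use, the same endpoint can be called from Python instead of curl. This is a minimal sketch: the URL comes from the listing above, but the response field names (`stars`, `forks`) are assumptions; inspect the returned JSON to confirm the actual schema.

```python
# Minimal sketch of calling the pt-edge quality API from Python.
# Endpoint URL is taken from the curl example above; response field
# names used at the bottom are assumptions, not documented here.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("NishilBalar", "Awesome-LVLM-Hallucination")
    # Hypothetical field names -- check the real payload before relying on them.
    print(data.get("stars"), data.get("forks"))
```

The network call is kept under the `__main__` guard so the helpers can be imported and reused without triggering a request; remember the unauthenticated limit of 100 requests/day.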
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
Attacks that induce hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...
HillZhang1999/llm-hallucination-survey
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI...