zjunlp/Deco
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
When working with Multimodal Large Language Models (MLLMs) that combine images and text, you might notice the model sometimes invents details or incorrectly describes what it sees in an image. This tool mitigates these 'hallucinations' by adjusting the MLLM's decoding process: given your MLLM's visual input and prompts, it produces more accurate, visually grounded text descriptions. Researchers and practitioners building or deploying MLLM applications would find this useful.
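As a rough intuition for how correction-style decoding works: earlier transformer layers often retain more visually grounded token preferences, so their logits can be mixed into the final layer's logits before sampling. The snippet below is a toy sketch of that general idea only, not the repository's actual algorithm; the function names, the confidence-based layer selection, and the mixing weight `alpha` are illustrative assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def dynamic_correction(final_logits, layer_logits, alpha=0.6):
    """Toy correction-style decoding step.

    final_logits: logits from the model's last layer (one decoding step).
    layer_logits: list of logit vectors from earlier layers.
    alpha: how strongly the chosen earlier layer is mixed in (assumed value).
    """
    # Dynamically pick the earlier layer whose token distribution is most
    # confident -- a stand-in heuristic for layer selection.
    anchor = max(layer_logits, key=lambda l: max(softmax(l)))
    # Proportionally integrate the anchor layer's logits into the final ones.
    return [(1 - alpha) * f + alpha * a for f, a in zip(final_logits, anchor)]

# Example: the final layer prefers token 1, but a confident earlier layer
# prefers token 0; after correction, token 0 wins.
final = [0.0, 2.0, 0.0]
layers = [[5.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
corrected = dynamic_correction(final, layers)
```

In a real MLLM this mixing would happen inside the generation loop at every decoding step, operating on the model's per-layer hidden states projected through the output head.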
137 stars. No commits in the last 6 months.
Use this if your Multimodal Large Language Models are generating inaccurate or fabricated information when interpreting images, and you want to improve their reliability.
Not ideal if you are working exclusively with text-based language models or if your primary concern is not model hallucination.
Stars: 137
Forks: 11
Language: Python
License: MIT
Category:
Last pushed: Sep 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/Deco"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
THU-BPM/MarkLLM
MarkLLM: An Open-Source Toolkit for LLM Watermarking (EMNLP 2024 System Demonstration)
git-disl/Vaccine
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large...
HillZhang1999/ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced...
voidism/DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality...
kaist-cvml/I-HallA-v1.0
[AAAI 2025] Official Implementation of I-HallA v1.0