zjunlp/Deco

[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation

Score: 39 / 100 (Emerging)

When working with Multimodal Large Language Models (MLLMs) that combine images and text, you may notice the model sometimes invents details or misdescribes what it sees in an image. This tool adjusts the MLLM's decoding process to reduce these hallucinations: given the model's visual input and prompts, it produces more accurate, visually grounded text descriptions. Researchers and practitioners building or deploying MLLM applications would find it useful.
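The project's title, "Dynamic Correction Decoding," suggests the correction happens while the model generates text. A minimal, purely illustrative sketch of that general idea (the numbers, the `alpha` weight, and the layer-selection are hypothetical, not DeCo's actual algorithm): blend next-token logits from an earlier layer, assumed here to be more visually grounded, into the final layer's logits before choosing the next token.

```python
# Illustrative logit-correction sketch. All values and the blend rule are
# hypothetical; this is NOT DeCo's exact algorithm or layer-selection method.
final_logits = [2.0, 1.5, 1.0, 0.5]  # final layer: token 0 (hallucinated) wins
early_logits = [0.5, 0.8, 2.5, 0.2]  # earlier layer: token 2 (grounded) wins

alpha = 0.8  # correction strength (assumed value)
corrected = [f + alpha * e for f, e in zip(final_logits, early_logits)]

def argmax(xs):
    """Index of the largest element."""
    return max(range(len(xs)), key=xs.__getitem__)

print(argmax(final_logits))  # 0 -> hallucinated token before correction
print(argmax(corrected))     # 2 -> grounded token after correction
```

The point of the sketch is only the shape of the intervention: the final layer's preference can be overridden when an earlier layer strongly favors a different, grounded token.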

137 stars. No commits in the last 6 months.

Use this if your Multimodal Large Language Models are generating inaccurate or fabricated information when interpreting images, and you want to improve their reliability.

Not ideal if you are working exclusively with text-based language models or if your primary concern is not model hallucination.

Tags: AI model reliability · image-to-text generation · MLLM application development · hallucination reduction · computer vision
Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 137
Forks: 11
Language: Python
License: MIT
Last pushed: Sep 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/Deco"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
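For programmatic use, the same endpoint can be called from Python with the standard library. The URL path segments are taken from the curl command above; the response is assumed to be JSON (the exact schema is not documented here).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the quality-API URL, e.g. quality_url("transformers", "zjunlp/Deco")."""
    return f"{BASE}/{registry}/{repo}"

def fetch_quality(registry: str, repo: str) -> dict:
    """Fetch quality data; no API key needed for up to 100 requests/day.
    Assumes the endpoint returns a JSON object."""
    with urllib.request.urlopen(quality_url(registry, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "zjunlp/Deco"))
```

`fetch_quality` is a thin wrapper; for the 1,000/day tier you would attach your key however the API expects (not specified in this listing).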