taco-group/Re-Align

[EMNLP'25] A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.

Score: 31 / 100 · Emerging

This project helps AI developers and researchers improve the reliability of Vision Language Models (VLMs) by reducing "hallucinations": outputs that describe content not actually supported by the input image. It takes an existing VLM and training data (image-text pairs) and produces a fine-tuned VLM that is less prone to these errors and performs better on general visual question-answering tasks. It is aimed at those who develop or deploy VLMs and need responses that are accurate and trustworthy.

No commits in the last 6 months.

Use this if you are a VLM developer or researcher focused on enhancing model accuracy and mitigating hallucinations in your vision-language AI applications.

Not ideal if you are an end-user without a technical background in AI model training, or if you're looking for a pre-trained, ready-to-use VLM that requires no fine-tuning.

AI-model-development vision-language-models hallucination-mitigation AI-safety model-alignment
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 50
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Aug 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/taco-group/Re-Align"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
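The same request can be made from Python using only the standard library. A minimal sketch follows; the endpoint URL is taken from the curl example above, but the structure of the JSON response is an assumption, so the sketch only builds the URL and decodes whatever JSON comes back.

```python
import json
from urllib.request import urlopen

# Base endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is needed within the free tier (100 requests/day);
    the response schema is not documented here, so callers should
    inspect the returned dict rather than assume field names.
    """
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Live call (counts against the daily quota):
# report = fetch_quality("taco-group", "Re-Align")
```

Using `urllib` rather than a third-party HTTP client keeps the snippet dependency-free; swap in `requests` if it is already in your environment.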