The-Martyr/CausalMM

[ICLR 2025] Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality

Quality score: 32 / 100 (Emerging)

This project helps AI researchers and practitioners working with Multimodal Large Language Models (MLLMs) reduce "hallucinations", cases where the model generates inaccurate or irrelevant information. It works on top of existing MLLMs, analysing the causal role of attention to counteract visual and language modality priors, so that outputs are less prone to incorrect details driven by those biases. It is designed for those who develop or deploy advanced AI models and need outputs that stay grounded in the actual input.
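This page does not document the method's code, but as a rough, hypothetical sketch of the general idea the title points to (contrasting the model's learned attention with a counterfactual alternative to expose prior-driven behaviour), the toy PyTorch snippet below compares logits under factual and uniform counterfactual attention and nudges the prediction away from the prior-driven part. All names, shapes, and the correction rule are illustrative assumptions, not the repository's actual implementation.

import torch
import torch.nn.functional as F

def attended_logits(values, attn, proj):
    # Pool token features with the given attention weights, then project to vocabulary logits.
    pooled = attn @ values              # (1, d_model)
    return pooled @ proj                # (1, vocab)

# Toy stand-ins for one decoding step of an MLLM.
torch.manual_seed(0)
num_tokens, d_model, vocab = 6, 16, 10
values = torch.randn(num_tokens, d_model)             # visual + text token features
proj = torch.randn(d_model, vocab)                     # output projection
attn = F.softmax(torch.randn(1, num_tokens), dim=-1)   # learned ("factual") attention

# Counterfactual attention: uniform weights, erasing whatever modality prior
# the learned attention may encode.
cf_attn = torch.full_like(attn, 1.0 / num_tokens)

factual = attended_logits(values, attn, proj)
counterfactual = attended_logits(values, cf_attn, proj)

# Illustrative contrastive correction: amplify the part of the prediction that
# depends on how the model attends, rather than on prior-driven defaults.
alpha = 0.5  # hypothetical correction strength
debiased = factual + alpha * (factual - counterfactual)
print(debiased.argmax(dim=-1))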

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher developing or fine-tuning multimodal large language models and are struggling with issues of factual accuracy or hallucination in model outputs.

Not ideal if you are an end-user simply looking to apply a pre-trained MLLM without needing to modify its internal workings or address core model reliability issues.

Multimodal AI · Large Language Models · AI Reliability · Machine Learning Research · Model Debugging
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 6 / 25
(The four sub-scores sum to the overall score: 2 + 8 + 16 + 6 = 32 / 100.)

How are scores calculated?

Stars: 61
Forks: 3
Language: Python
License: MIT
Last pushed: Jul 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/The-Martyr/CausalMM"

Open to everyone: 100 requests/day with no API key needed. Get a free key to raise the limit to 1,000 requests/day.
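If you would rather call the endpoint from Python than curl, a minimal sketch using the requests library is below (assuming requests is installed; the response schema is not documented on this page, so the JSON is printed as-is).

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/The-Martyr/CausalMM"
resp = requests.get(url, timeout=30)   # no API key needed within the free daily quota
resp.raise_for_status()
print(resp.json())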