The-Martyr/CausalMM
[ICLR 2025] Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality
This project helps AI researchers and practitioners working with Multimodal Large Language Models (MLLMs) reduce "hallucinations", i.e. outputs that contain inaccurate or irrelevant details. It plugs into existing MLLMs and counteracts modality priors at the attention level, making the model less prone to generating incorrect details driven by visual or linguistic biases. It is aimed at people who develop or deploy advanced AI models and need outputs that stay grounded in the actual input.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher developing or fine-tuning multimodal large language models and are struggling with issues of factual accuracy or hallucination in model outputs.
Not ideal if you are an end-user simply looking to apply a pre-trained MLLM without needing to modify its internal workings or address core model reliability issues.
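For intuition about the attention-level causal intervention described above, here is a minimal, hypothetical sketch in PyTorch. It is not code from this repository: the toy attention module, the uniform counterfactual attention, and the alpha mixing weight are all illustrative assumptions, meant only to show the general idea of contrasting factual attention with a counterfactual one.

import torch
import torch.nn.functional as F

# Toy single-head attention over a handful of "visual tokens".
# All tensors are random stand-ins; in practice the factual attention
# would come from the MLLM itself (this is only an illustrative sketch).
torch.manual_seed(0)
d, n_tokens, vocab = 16, 8, 32
q = torch.randn(1, d)            # query for the next-token position
k = torch.randn(n_tokens, d)     # keys over visual tokens
v = torch.randn(n_tokens, d)     # values over visual tokens
lm_head = torch.randn(d, vocab)  # projection to a toy vocabulary

def logits_with(attn):
    """Project an attention-weighted context vector to vocabulary logits."""
    ctx = attn @ v               # (1, d) context under the given attention
    return ctx @ lm_head         # (1, vocab)

# Factual attention as the model would produce it.
factual_attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)

# Counterfactual attention: here a uniform map, i.e. "what if the model
# attended to the image indiscriminately?" (one of several possible choices).
counterfactual_attn = torch.full_like(factual_attn, 1.0 / n_tokens)

factual_logits = logits_with(factual_attn)
counterfactual_logits = logits_with(counterfactual_attn)

# Contrastive adjustment: keep the part of the prediction that is actually
# caused by the attention pattern, damping prior-driven tokens.
alpha = 1.0  # intervention strength (hypothetical default)
adjusted_logits = (1 + alpha) * factual_logits - alpha * counterfactual_logits

print(adjusted_logits.argmax(dim=-1))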
Stars: 61
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Jul 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/The-Martyr/CausalMM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
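The same request from Python, assuming the endpoint returns JSON (a minimal sketch, not an official client; the response schema is not documented here):

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/The-Martyr/CausalMM"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())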
Higher-rated alternatives
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM...
PRIME-RL/TTRL
[NeurIPS 2025] TTRL: Test-Time Reinforcement Learning
sapientinc/HRM
Hierarchical Reasoning Model Official Release
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models