The-Martyr/Awesome-Modality-Priors-in-MLLMs

Latest Advances on Modality Priors in Multimodal Large Language Models

Score: 27 / 100 (Experimental)

This collection helps AI researchers and practitioners understand and address hallucinations and biases in Multimodal Large Language Models (MLLMs). It organizes research papers on how MLLMs' inherent biases, known as modality priors, affect their outputs. Researchers can use it to find studies on mitigating these issues and on evaluating model performance, toward more reliable AI systems that combine text, images, and other data.

Use this if you are an AI researcher or machine learning engineer working with MLLMs and need to combat issues like AI hallucination, where models generate plausible but incorrect information, or want to understand their inherent biases.

Not ideal if you are looking for an off-the-shelf software tool or a guide on how to *use* MLLMs for general applications, rather than research into their fundamental behaviors and limitations.

Tags: AI-hallucination · multimodal-AI · AI-bias-mitigation · LLM-evaluation · AI-robustness
No license · No package · No dependents

Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 6 / 25

How are scores calculated? Each dimension is worth 25 points, and the overall score is their sum (here 6 + 7 + 8 + 6 = 27 / 100).

Stars: 30
Forks: 2
Language: not listed
License: none
Last pushed: Dec 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/The-Martyr/Awesome-Modality-Priors-in-MLLMs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
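
If you want this data in a script rather than on the command line, a minimal Python sketch is below. It reuses the endpoint from the curl command above and simply prints whatever JSON comes back, since the response schema is not documented on this page.

import json
import urllib.request

# Endpoint copied from the curl example above; the response schema is
# undocumented here, so this sketch just dumps whatever JSON is returned.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/The-Martyr/Awesome-Modality-Priors-in-MLLMs")

def fetch_quality(url: str = URL) -> dict:
    # No API key is needed at the free tier (100 requests/day).
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality(), indent=2))

With a free key (1,000 requests/day), the key would presumably be passed as a header or query parameter; the exact mechanism is not shown on this page, so check the API documentation.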