Ruiyang-061X/Uncertainty-o

✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Models".

Overall score: 26 / 100 (Experimental)

This project helps researchers and developers working with Large Multimodal Models (LMMs) understand how confident these models are in their responses. It takes a multimodal prompt (like an image and text query) and an LMM, then outputs a quantifiable measure of the model's uncertainty about its answer. This is useful for anyone evaluating LMM performance, especially in scenarios where accuracy and reliability are critical.
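
The general recipe for this kind of black-box uncertainty estimation is sampling-based: pose the same multimodal question several ways and measure how much the model's answers disagree. Below is a minimal, hypothetical Python sketch of that idea; answer_fn, the prompt-perturbation step, and the exact-match disagreement metric are illustrative assumptions, not the framework's actual algorithm.

from collections import Counter
from typing import Callable, List

def uncertainty_score(answer_fn: Callable[[str], str], prompts: List[str]) -> float:
    # Sample one answer per (perturbed) prompt, then score disagreement:
    # 0.0 when every answer matches, approaching 1.0 when they all differ.
    answers = [answer_fn(p) for p in prompts]
    top_count = Counter(answers).most_common(1)[0][1]
    return 1.0 - top_count / len(answers)

# Usage (hypothetical): my_lmm wraps an image+text call to a real model.
# uncertainty_score(my_lmm, ["What object is in the image?",
#                            "Name the main object shown in the image."])

High disagreement across rephrasings of the same query is a common signal of hallucination-prone answers, which is how a score like this supports the reliability checks described above.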

No commits in the last 6 months.

Use this if you need to detect and understand 'hallucinations' or unreliable outputs from Large Multimodal Models when processing mixed image and text inputs.

Not ideal if you are working with purely text-based models or if you need a solution that directly improves LMM accuracy rather than just measuring its uncertainty.

Tags: AI model evaluation, multimodal AI, hallucination detection, model reliability, AI safety
Flags: No License, Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 12 / 25

How are scores calculated?
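
Judging by the breakdown above, the overall 26 / 100 appears to be the plain sum of the four 0-25 subscores. A quick check (the additive, equal-weight formula is inferred from this page's numbers, not from any published documentation):

maintenance, adoption, maturity, community = 0, 6, 8, 12
overall = maintenance + adoption + maturity + community
print(overall)  # 26, matching the 26 / 100 shown at the top of this page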

Stars: 18
Forks: 3
Language: Python
License: none
Last pushed: Mar 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Ruiyang-061X/Uncertainty-o"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
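
For scripted access, a minimal Python equivalent of the curl call above (only the URL comes from this page; the shape of the returned JSON is not documented here, so inspect it rather than assuming field names):

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Ruiyang-061X/Uncertainty-o"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()   # surface HTTP errors (e.g. after hitting the 100 requests/day limit)
print(resp.json())        # payload fields are undocumented here, so explore the response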