xingbpshen/prompt4trust

[ICCV 2025 CVAMD] The official implementation of the paper "Prompt4Trust: A Reinforcement Learning Prompt Augmentation Framework for Clinically-Aligned Confidence Calibration in Multimodal Large Language Models".

Score: 27 / 100 (Experimental)

When working with multimodal large language models (MLLMs) in healthcare, trusting their predictions is crucial, especially in sensitive clinical settings. This project helps healthcare professionals and clinical researchers ensure that MLLM outputs, which combine data such as medical images and text, come with reliable confidence scores. It takes the MLLM's inputs and generates augmenting prompts that guide the model toward predictions whose stated confidence better reflects their actual accuracy, enhancing the trustworthiness of the results.

Use this if you need to ensure that the confidence scores provided by multimodal AI models in clinical applications accurately reflect their true likelihood of being correct.

Not ideal if you are working with non-clinical data or if you primarily need to improve the base accuracy of an MLLM rather than the reliability of its confidence scores.
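The prompt-augmentation idea described above can be sketched as follows. This is a conceptual illustration only: every function name and prompt string here is hypothetical, not the Prompt4Trust API (the actual augmenting prompts are learned via reinforcement learning, per the paper).

```python
# Conceptual sketch only: all names and strings are hypothetical
# illustrations of prompt augmentation, not the Prompt4Trust code.

def augment_prompt(clinical_question: str, learned_prefix: str) -> str:
    """Prepend a calibration-oriented prompt to the clinical question
    before it is sent to the MLLM alongside the medical image."""
    return (
        f"{learned_prefix}\n"
        f"{clinical_question}\n"
        "State your answer and a confidence (0-100%) that matches "
        "how often answers like this one are correct."
    )

prompt = augment_prompt(
    "Does this chest X-ray show signs of pneumonia?",
    "You are assisting a clinician; calibrate your confidence carefully.",
)
print(prompt)
```

In the actual framework, the augmenting text is not hand-written like this prefix; it is produced by a policy trained to improve the downstream model's confidence calibration.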

Tags: clinical decision support, medical imaging analysis, healthcare AI, trustworthy AI, diagnostic assistance
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 14
Forks:
Language: Python
License: MIT
Last pushed: Dec 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/xingbpshen/prompt4trust"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
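The JSON returned by the curl call above could be consumed in Python along these lines. Note the field names in `sample` are assumptions inferred from the scores shown on this page, not a documented response schema; check the actual API response before relying on them.

```python
import json

# Hypothetical response payload: field names ("score", "grade", "breakdown")
# are assumptions based on the values displayed on this page, not a
# documented schema for the pt-edge.onrender.com API.
sample = json.loads("""
{
  "score": 27,
  "grade": "Experimental",
  "breakdown": {"maintenance": 6, "adoption": 5, "maturity": 16, "community": 0}
}
""")

def summarize(payload: dict) -> str:
    """Format the overall score and per-category breakdown as one line."""
    parts = ", ".join(f"{k}: {v}/25" for k, v in payload["breakdown"].items())
    return f"{payload['score']}/100 ({payload['grade']}) - {parts}"

print(summarize(sample))
```

In a real client, the `sample` string would be replaced by the body of an HTTP GET to the endpoint shown above.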