xingbpshen/prompt4trust
[ICCV 2025 CVAMD] The official implementation of the paper "Prompt4Trust: A Reinforcement Learning Prompt Augmentation Framework for Clinically-Aligned Confidence Calibration in Multimodal Large Language Models".
Multimodal large language models (MLLMs), which combine inputs such as medical images and text, are increasingly used in healthcare, where trusting their predictions is critical in sensitive clinical settings. Prompt4Trust helps healthcare professionals and clinical researchers obtain MLLM outputs with reliable confidence scores: it generates auxiliary prompts that guide the MLLM toward predictions whose stated confidence is better aligned with their actual correctness, enhancing the trustworthiness of the results.
Use this if you need the confidence scores reported by multimodal AI models in clinical applications to accurately reflect their true likelihood of being correct.
Not ideal if you are working with non-clinical data, or if you primarily need to improve an MLLM's base accuracy rather than the reliability of its confidence scores.
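To make "better-aligned confidence levels" concrete, calibration is commonly quantified with Expected Calibration Error (ECE): predictions are binned by reported confidence, and each bin's average confidence is compared against its empirical accuracy. The sketch below is illustrative only — the function name, binning scheme, and toy data are assumptions for this example, not Prompt4Trust's actual evaluation code.

```python
# Illustrative sketch (not from the Prompt4Trust codebase): computing
# Expected Calibration Error (ECE) over a set of predictions.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and sum the weighted gap between
    each bin's average confidence and its empirical accuracy."""
    assert len(confidences) == len(correct)
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Assign each prediction to exactly one bin; the top edge (1.0)
        # is included in the last bin.
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == hi)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece

# Toy data: an overconfident model (high confidence, mixed correctness).
confs = [0.95, 0.9, 0.92, 0.85, 0.6]
hits = [1, 0, 1, 0, 1]
print(round(expected_calibration_error(confs, hits), 3))  # → 0.404
```

A well-calibrated model drives this value toward zero: among predictions made with 90% confidence, roughly 90% should be correct.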
Stars
14
Forks
—
Language
Python
License
MIT
Category
Last pushed
Dec 11, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/xingbpshen/prompt4trust"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ShiZhengyan/PowerfulPromptFT
[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining?...
OpenDriveLab/DriveLM
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
MILVLG/prophet
Implementation of CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for...
deepankar27/Prompt_Organizer
Managed Prompt Engineering
mala-lab/NegPrompt
The official implementation of CVPR 24' Paper "Learning Transferable Negative Prompts for...