duyhominhnguyen/Exgra-Med
[NeurIPS 2025] ExGra-Med: Medical Multi-Modal LLM with Extended Context Alignment
EXGRA-MED helps medical professionals and researchers interpret medical images more accurately and efficiently. It takes medical images (like X-rays or scans) and text questions, then provides detailed, context-aware answers. This tool is designed for anyone needing to ask nuanced questions about medical visuals and receive highly relevant text responses.
Use this if you need a medical AI model that excels at understanding both images and text, particularly for tasks like answering questions about medical scans or powering a medical chatbot, while being more data-efficient than comparable models.
Not ideal if you mainly need general-purpose image analysis or a general-purpose language model, since its strengths are tailored specifically to the medical domain.
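For a sense of how a visual question answering call might look, here is a minimal sketch. It assumes the checkpoint is published on the Hugging Face Hub under the hypothetical ID "duyhominhnguyen/Exgra-Med" and can be loaded through a LLaVA-style interface in transformers; the repository's own inference scripts are the authoritative path, so treat this only as an illustration.

# Hypothetical VQA sketch; the model ID, prompt format, and LLaVA-compatible
# loading path are assumptions, not the repository's documented interface.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "duyhominhnguyen/Exgra-Med"  # hypothetical Hub ID

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(MODEL_ID, device_map="auto")

image = Image.open("chest_xray.png")  # any local medical image
prompt = "USER: <image>\nIs there evidence of pleural effusion? ASSISTANT:"

# Encode the image-question pair and generate a free-text answer.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))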
Stars: 41
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Dec 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/duyhominhnguyen/Exgra-Med"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
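If you prefer calling the endpoint from Python rather than curl, the sketch below fetches the same URL. The response schema is not documented here, so it simply prints the returned JSON payload.

# Fetch repository quality data from the endpoint shown above.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/duyhominhnguyen/Exgra-Med"

resp = requests.get(URL, timeout=30)
resp.raise_for_status()  # raise on rate limiting or other HTTP errors
data = resp.json()
print(data)  # field names are undocumented here, so inspect the payload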
Higher-rated alternatives
WangRongsheng/XrayGLM
🩺 The first Chinese multimodal medical LLM that can interpret chest X-rays (chest radiograph summarization).
Event-AHU/Medical_Image_Analysis
Foundation models based medical image analysis
cambridgeltl/visual-med-alpaca
Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the...
canyuchen/ClinicalBench
Code for the paper "ClinicalBench: Can LLMs Beat Traditional ML Models in Clinical Prediction?"
monarch-initiative/pheval.llm
Analysis of LLMs for Clinical Observations