cambridgeltl/visual-med-alpaca
Visual Med-Alpaca is an open-source, multimodal foundation model designed specifically for the biomedical domain, built on LLaMA-7B.
This project offers a specialized AI model that understands and processes medical information from both text and images. It takes medical questions and related visual data, such as radiological images, to provide comprehensive answers. Clinical researchers, medical educators, or those analyzing medical literature and imagery for academic purposes would find this useful.
394 stars. No commits in the last 6 months.
Use this if you are a biomedical researcher or educator needing to interpret medical images alongside complex clinical text, and want a unified AI tool to assist in understanding or summarizing information for academic research.
Not ideal if you are a clinician seeking direct diagnostic support or real-time patient care recommendations, as this tool is strictly for academic research and not approved for clinical use.
Stars: 394
Forks: 44
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cambridgeltl/visual-med-alpaca"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
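If you prefer calling the endpoint from code, here is a minimal Python sketch using only the standard library. It assumes the same URL as the curl example above; the response field names in the comments are assumptions, so inspect the actual payload rather than relying on them.

import json
import urllib.request

# Endpoint taken from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/cambridgeltl/visual-med-alpaca")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record for a repo (no API key needed
    up to 100 requests/day)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality()
    # Field names such as "stars" or "last_pushed" are hypothetical;
    # print the full payload to see what the API actually returns.
    print(json.dumps(data, indent=2))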
Higher-rated alternatives
WangRongsheng/XrayGLM
🩺 The first Chinese multimodal medical large model that can read chest X-rays and summarize radiographs.
Event-AHU/Medical_Image_Analysis
Medical image analysis based on foundation models.
canyuchen/ClinicalBench
Code for the paper "ClinicalBench: Can LLMs Beat Traditional ML Models in Clinical Prediction?"
monarch-initiative/pheval.llm
Analysis of LLMs for Clinical Observations
jqwangai/Medical-LLM
A Repository of Medical Large Language Models