cambridgeltl/visual-med-alpaca

Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on LLaMA-7B.

Quality score: 43 / 100 (Emerging)

This project offers a specialized AI model that understands and processes medical information from both text and images. It takes medical questions and related visual data, such as radiological images, to provide comprehensive answers. Clinical researchers, medical educators, or those analyzing medical literature and imagery for academic purposes would find this useful.

394 stars. No commits in the last 6 months.

Use this if you are a biomedical researcher or educator needing to interpret medical images alongside complex clinical text, and want a unified AI tool to assist in understanding or summarizing information for academic research.

Not ideal if you are a clinician seeking direct diagnostic support or real-time patient care recommendations, as this tool is strictly for academic research and not approved for clinical use.

biomedical-research medical-imaging-analysis clinical-question-answering medical-education biomedical-nlp
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 394
Forks: 44
Language: Python
License: Apache-2.0
Last pushed: Mar 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cambridgeltl/visual-med-alpaca"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
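The curl call above can be wrapped in a small Python helper using only the standard library. This is a minimal sketch: the endpoint path is taken verbatim from the example, but the JSON field names in the response are not documented here, so treat them as assumptions to verify against a live response.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repo ("owner/name")."""
    return f"{API_BASE}/{ecosystem}/{repo}"


def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON.

    Note: response field names (e.g. a 'score' key) are assumptions;
    the schema is not shown on this page.
    """
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)


# Reproduces the URL from the curl example above:
url = quality_url("transformers", "cambridgeltl/visual-med-alpaca")
```

Keeping the URL construction separate from the network call makes it easy to test offline and to swap in an API key later for the higher 1,000 requests/day limit.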