BioMedIA-MBZUAI/FetalCLIP
Official repository of FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis
This project helps medical professionals, specifically sonographers and obstetricians, analyze fetal ultrasound images more accurately and efficiently. Given an ultrasound image, the model can assist in classifying fetal planes, estimating gestational age, and detecting congenital heart defects. It is designed for healthcare practitioners who regularly interpret fetal ultrasound scans.
Use this if you need to interpret fetal ultrasound images for clinical assessment tasks such as classifying anatomical planes, estimating gestational age, or flagging potential congenital heart defects.
Not ideal if you are looking for a general-purpose medical imaging tool, as this model is trained and optimized specifically for fetal ultrasound imagery.
Stars
59
Forks
12
Language
Python
License
—
Category
Last pushed
Feb 05, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BioMedIA-MBZUAI/FetalCLIP"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Related frameworks
mlfoundations/open_clip
An open source implementation of CLIP.
noxdafox/clipspy
Python CFFI bindings for the 'C' Language Integrated Production System CLIPS
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
moein-shariatnia/OpenAI-CLIP
Simple implementation of OpenAI CLIP model in PyTorch.
filipbasara0/simple-clip
A minimal, but effective implementation of CLIP (Contrastive Language-Image Pretraining) in PyTorch