PathologyFoundation/plip
Pathology Language and Image Pre-training (PLIP) is the first vision-and-language foundation model for pathology AI (published in Nature Medicine). PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and their text descriptions. It is a fine-tuned version of OpenAI's original CLIP model.
This project helps pathology researchers and clinicians analyze medical images by connecting visual patterns in pathology slides with their corresponding textual descriptions. You input digital pathology images and associated text (such as diagnoses or clinical notes), and it extracts meaningful features from both, enabling new ways to search, classify, or characterize disease. Pathologists, medical researchers, and computational biologists working with histopathology will find it useful.
373 stars. No commits in the last 6 months.
Use this if you need to extract and connect rich features from both pathology images and their descriptive text for research or diagnostic support.
Not ideal if your primary goal is general image recognition outside of the pathology domain or if you only work with text data.
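Since PLIP is a fine-tuned CLIP, it can typically be loaded as a standard CLIP checkpoint through Hugging Face transformers. A minimal sketch: `plip_scores` shows the intended usage (the checkpoint name `"vinid/plip"` is the authors' published Hugging Face model id, assumed unchanged), while `rank_by_cosine` demonstrates the matching step PLIP inherits from CLIP, here run on toy embeddings so it needs no model download.

```python
# Sketch of using PLIP as a drop-in CLIP checkpoint (assumptions: transformers
# API and the "vinid/plip" model id; verify against the repo's own README).
import numpy as np

def plip_scores(image, texts):
    """Score one pathology image against candidate text labels with PLIP.
    Requires `pip install transformers torch pillow` plus a network download."""
    from transformers import CLIPModel, CLIPProcessor  # heavy optional deps
    model = CLIPModel.from_pretrained("vinid/plip")
    processor = CLIPProcessor.from_pretrained("vinid/plip")
    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    # logits_per_image: one row of image-text similarity scores
    return plip_logits_softmax(model(**inputs))

def plip_logits_softmax(outputs):
    return outputs.logits_per_image.softmax(dim=-1)

def rank_by_cosine(image_emb, text_embs):
    """CLIP-style retrieval: cosine similarity between the L2-normalised
    image embedding and each L2-normalised text embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img
    return np.argsort(-sims)  # indices of texts, best match first

# Toy demo of the ranking step (no model needed): the first "text" embedding
# is a lightly perturbed copy of the image embedding, so it should rank first.
rng = np.random.default_rng(0)
img = rng.normal(size=512)
texts = np.stack([img + rng.normal(scale=0.1, size=512),
                  rng.normal(size=512)])
order = rank_by_cosine(img, texts)
print(order[0])  # → 0
```

The cosine-ranking helper is the piece you would reuse for slide-to-text retrieval once real embeddings come out of the model's `get_image_features` / `get_text_features` methods.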
Stars: 373
Forks: 37
Language: Python
License: —
Category: —
Last pushed: Sep 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PathologyFoundation/plip"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jmisilo/clip-gpt-captioning
CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2.
leaderj1001/CLIP
CLIP: Connecting Text and Image (Learning Transferable Visual Models From Natural Language Supervision)
kesimeg/turkish-clip
OpenAI's clip model training for Turkish language using pretrained Resnet and DistilBERT
Lahdhirim/CV-image-captioning-clip-gpt2
Image caption generation using a hybrid CLIP-GPT2 architecture. CLIP encodes the image while...