PathologyFoundation/plip

Pathology Language and Image Pre-Training (PLIP) is the first vision and language foundation model for Pathology AI, published in Nature Medicine. PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and their text descriptions. The model is a fine-tuned version of the original CLIP model.

Quality score: 34 / 100 (Emerging)

This project helps pathology researchers and clinicians analyze medical images by connecting visual patterns in pathology slides with their corresponding textual descriptions. You input digital pathology images and associated text (like diagnoses or clinical notes), and the model extracts meaningful features from both, enabling new ways to search, classify, or understand disease characteristics. Pathologists, medical researchers, and computational biologists working with histopathology will find this useful.
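
As a minimal sketch of that workflow using the Hugging Face transformers CLIP classes: the Hub identifier "vinid/plip" and the file name "patch.png" below are assumptions, so check the repository README for the exact model name and usage.

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "vinid/plip"  # assumed Hub identifier; see the repo README
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

image = Image.open("patch.png")  # hypothetical pathology image patch
caption = "an H&E image of invasive ductal carcinoma"

inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    image_features = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_features = model.get_text_features(input_ids=inputs["input_ids"],
                                            attention_mask=inputs["attention_mask"])

# Cosine similarity between the image and caption embeddings
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
print("image-text similarity:", (image_features @ text_features.T).item())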

373 stars. No commits in the last 6 months.

Use this if you need to extract and connect rich features from both pathology images and their descriptive text for research or diagnostic support.

Not ideal if your primary goal is general image recognition outside of the pathology domain or if you only work with text data.
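
For the feature-connection use case above, the same model can also score an image against candidate text labels, CLIP-style. This is a sketch under the same assumptions (the Hub identifier, file name, and label phrasings are illustrative):

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "vinid/plip"  # assumed Hub identifier
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

labels = [
    "an H&E image of benign tissue",
    "an H&E image of adenocarcinoma",
    "an H&E image of lymphocytes",
]
image = Image.open("patch.png")  # hypothetical pathology image patch

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image's similarity to each candidate caption
probs = outputs.logits_per_image.softmax(dim=-1).squeeze()
for label, p in zip(labels, probs.tolist()):
    print(f"{p:.2f}  {label}")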

pathology-ai histopathology-analysis medical-imaging disease-research digital-pathology
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 16 / 25

Stars: 373
Forks: 37
Language: Python
License: None
Last pushed: Sep 20, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PathologyFoundation/plip"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
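
The same endpoint can be called from a script. A small sketch with Python requests follows; the response schema is not documented here, so the code just prints whatever fields come back.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/PathologyFoundation/plip")

# Anonymous request (100 requests/day per the note above)
resp = requests.get(url, timeout=10)
resp.raise_for_status()

for key, value in resp.json().items():
    print(key, ":", value)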