CLIP Vision Language Transformer Models

There are 5 CLIP vision language models tracked. The highest-rated is jmisilo/clip-gpt-captioning, scoring 46/100 with 118 stars.

Get all 5 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=clip-vision-language&limit=20"
```

The API is open to everyone: 100 requests/day with no key, or 1,000/day with a free key.
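The curl call above can also be made from Python. A minimal sketch follows; note that the response field names (`model`, `score`, `tier`) are assumptions modeled on the table below, not documented API schema, so the parsing step uses a hypothetical sample payload rather than a live request.

```python
import json
from urllib.parse import urlencode

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Assemble the query URL for the quality endpoint shown above."""
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    return f"{BASE}?{urlencode(params)}"

# Hypothetical response shape -- the field names below are assumptions
# based on the table columns, not documented by the API.
sample = json.loads("""
[
  {"model": "jmisilo/clip-gpt-captioning", "score": 46, "tier": "Emerging"},
  {"model": "kesimeg/turkish-clip", "score": 13, "tier": "Experimental"}
]
""")

def top_model(records):
    """Return the highest-scoring entry from a list of records."""
    return max(records, key=lambda r: r["score"])

url = build_url("transformers", "clip-vision-language")
best = top_model(sample)
```

Fetching `url` with any HTTP client (e.g. `urllib.request.urlopen`) and feeding the decoded JSON to `top_model` would reproduce the ranking shown in the table below, assuming the response shape matches the sketch.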

| # | Model | Description | Score | Tier |
| --- | --- | --- | --- | --- |
| 1 | jmisilo/clip-gpt-captioning | CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2. | 46 | Emerging |
| 2 | leaderj1001/CLIP | CLIP: Connecting Text and Image (Learning Transferable Visual Models From... | 39 | Emerging |
| 3 | PathologyFoundation/plip | Pathology Language and Image Pre-Training (PLIP) is the first vision and... | 34 | Emerging |
| 4 | kesimeg/turkish-clip | OpenAI's CLIP model trained for Turkish using a pretrained ResNet... | 13 | Experimental |
| 5 | Lahdhirim/CV-image-captioning-clip-gpt2 | Image caption generation using a hybrid CLIP-GPT2 architecture. CLIP encodes... | 12 | Experimental |