leaderj1001/CLIP
CLIP: Connecting Text and Image (Learning Transferable Visual Models From Natural Language Supervision)
This project helps machine learning engineers and researchers evaluate how well a model connects images with text descriptions: given an image dataset and the corresponding text labels, it outputs accuracy metrics showing how effectively the model links visual content with natural language. It is useful for anyone developing or assessing AI models that must interpret both images and text.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher who needs to evaluate the performance of models designed to link visual information with textual descriptions.
Not ideal if you are looking for a ready-to-use application for everyday tasks like image captioning or image search without programming.
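The evaluation described above boils down to scoring image-text similarity in a shared embedding space and checking whether each image best matches its own label. Below is a minimal sketch of that idea, not this repository's code: the embeddings are synthetic stand-ins for the outputs of CLIP's image and text encoders.

```python
import numpy as np

def zero_shot_accuracy(image_emb, text_emb, labels):
    """Match each image to its most similar text label by cosine similarity.

    image_emb: (n_images, d) image embeddings
    text_emb:  (n_classes, d) one embedding per candidate text label
    labels:    (n_images,) index of the correct label for each image
    """
    # L2-normalize so a plain dot product equals cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sims = image_emb @ text_emb.T        # (n_images, n_classes) similarity matrix
    predictions = sims.argmax(axis=1)    # best-matching label per image
    return float((predictions == labels).mean())

# Synthetic demo: each image embedding sits near its label's text embedding
rng = np.random.default_rng(0)
text_emb = rng.normal(size=(3, 8))      # 3 candidate labels, 8-dim embeddings
labels = np.array([0, 1, 2, 0])
image_emb = text_emb[labels] + 0.01 * rng.normal(size=(4, 8))
print(zero_shot_accuracy(image_emb, text_emb, labels))  # 1.0 at this low noise level
```

Real CLIP evaluation replaces the synthetic arrays with encoder outputs, but the accuracy computation is the same.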
Stars: 83
Forks: 11
Language: Python
License: MIT
Category:
Last pushed: Jan 19, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/leaderj1001/CLIP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
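The same endpoint can be queried from Python. A minimal sketch using only the standard library; the response schema is not documented on this page, so the code simply prints whatever JSON comes back rather than assuming field names, and the network call is guarded so the URL helper can be reused on its own.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL, e.g. for transformers/leaderj1001/CLIP."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the repo-quality data; raises urllib.error.URLError on failure."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the raw JSON payload for this repository.
    print(json.dumps(fetch_quality("transformers", "leaderj1001", "CLIP"), indent=2))
```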
Higher-rated alternatives
jmisilo/clip-gpt-captioning
CLIPxGPT Captioner is an image-captioning model based on OpenAI's CLIP and GPT-2.
PathologyFoundation/plip
Pathology Language and Image Pre-Training (PLIP) is the first vision and language foundation...
kesimeg/turkish-clip
Training of OpenAI's CLIP model for Turkish, using a pretrained ResNet and DistilBERT.
Lahdhirim/CV-image-captioning-clip-gpt2
Image caption generation using a hybrid CLIP-GPT2 architecture. CLIP encodes the image while...