halixness/understanding-CLIP
Repo from the "Learning with limited labeled data" seminar at the University of Tübingen. A collection of notes, notebooks, and slideshows for understanding CLIP and natural-language supervision.
This project collects resources for understanding CLIP and how natural-language supervision lets a model interpret images. It covers how CLIP embeds images and text descriptions in a shared space so they can be matched against each other, for example to retrieve the most relevant caption for an image, enabling systems that recognize visual content without extensive manual labeling. It is aimed at researchers and practitioners exploring computer vision, especially those interested in connecting images with descriptive language.
No commits in the last 6 months.
Use this if you want to understand how models learn to recognize visual content from natural-language descriptions, enabling zero-shot recognition or image-text matching (see the sketch below).
Not ideal if you're looking for a plug-and-play solution for a specific image analysis task rather than a deep dive into the underlying research and concepts.
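To make the idea concrete, here is a minimal zero-shot classification sketch using the openai/CLIP package (listed under alternatives below). The model name, image path, and candidate captions are illustrative placeholders, not taken from this repo:

# Minimal CLIP zero-shot classification sketch. Assumes the openai/CLIP
# package is installed: pip install git+https://github.com/openai/CLIP.git
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder inputs: any image file and a set of candidate captions.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["a photo of a dog", "a photo of a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # Normalize, then score each caption by cosine similarity to the image.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability that each caption matches the image

The caption with the highest score is the zero-shot prediction; no task-specific training or labels are involved.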
Stars: 17
Forks: 4
Language: Jupyter Notebook
License: —
Category: ml-frameworks
Last pushed: Apr 13, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/halixness/understanding-CLIP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
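If you prefer scripting the lookup, a minimal Python sketch using the requests library follows; the endpoint URL is the one shown above, and the response schema is not documented here, so the sketch just prints the raw JSON:

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/halixness/understanding-CLIP"
resp = requests.get(url, timeout=10)  # no key needed up to 100 requests/day
resp.raise_for_status()
print(resp.json())  # raw JSON payload; schema not documented here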
Higher-rated alternatives
mlfoundations/open_clip
An open source implementation of CLIP.
noxdafox/clipspy
Python CFFI bindings for the 'C' Language Integrated Production System CLIPS
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
moein-shariatnia/OpenAI-CLIP
Simple implementation of OpenAI CLIP model in PyTorch.
BioMedIA-MBZUAI/FetalCLIP
Official repository of FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis