ExcelsiorCJH/CLIP
CLIP: Learning Transferable Visual Models From Natural Language Supervision
This project helps you connect images with their natural language descriptions, enabling powerful search and classification. You provide a dataset of images paired with text, and it produces a model capable of understanding visual concepts from text. Anyone working with large image collections for tasks like content moderation, stock photography tagging, or visual search could use this.
No commits in the last 6 months.
Use this if you need to build a system that can find images based on text descriptions or categorize images using natural language, without extensive manual tagging.
Not ideal if you want a ready-to-use application and would rather not set up and train a machine learning model yourself.
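The matching a CLIP-style model performs boils down to cosine similarity between image and text embeddings in a shared space. The sketch below illustrates that scoring step only; the random vectors are placeholders standing in for real CLIP image/text encoders (such as those in openai/CLIP), and the 512-dimensional size is an assumption.

```python
import numpy as np

# Toy sketch of CLIP-style zero-shot matching. Real encoders map an
# image and candidate captions into a shared embedding space; random
# unit vectors stand in for those encoders here.
rng = np.random.default_rng(0)

def normalize(v):
    """Scale each row to unit length so dot products become cosines."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical embeddings: one image, three candidate captions.
image_emb = normalize(rng.normal(size=(1, 512)))
text_embs = normalize(rng.normal(size=(3, 512)))

# Cosine similarity of unit vectors is a plain dot product;
# the caption with the highest score is the predicted match.
scores = image_emb @ text_embs.T   # shape (1, 3)
best = int(scores.argmax())
print(best, scores.round(3))
```

In a real pipeline, classification works the same way: embed prompts like "a photo of a cat" for each label and pick the highest-scoring one, which is what removes the need for manual tagging.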
Stars: 7
Forks: 4
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Feb 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ExcelsiorCJH/CLIP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mlfoundations/open_clip
An open source implementation of CLIP.
noxdafox/clipspy
Python CFFI bindings for the 'C' Language Integrated Production System CLIPS
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
moein-shariatnia/OpenAI-CLIP
Simple implementation of OpenAI CLIP model in PyTorch.
BioMedIA-MBZUAI/FetalCLIP
Official repository of FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis