svpino/clip-container
A containerized REST API around OpenAI's CLIP model.
This is a tool for developers who need to integrate image classification into their applications. It takes image URLs and a list of candidate text labels, then returns how likely each image is to match each label. This lets developers add image understanding to their software without deep machine-learning expertise.
No commits in the last 6 months.
Use this if you are a developer building an application that needs to automatically identify objects or concepts in images from a list of predefined text descriptions.
Not ideal if you are an end-user looking for a ready-to-use application to organize or search images directly, as this requires coding knowledge to implement.
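As a sketch of the request flow described above: the client sends image URLs plus candidate labels as JSON to the running container. The field names (`images`, `classes`) used here are assumptions for illustration, not taken from the repository; check its README for the actual schema.

```python
import json

# Hypothetical request body for the containerized CLIP endpoint.
# The field names ("images", "classes") are assumptions for
# illustration; the real schema is defined by svpino/clip-container.
def build_clip_request(image_urls, labels):
    """Serialize image URLs and candidate text labels into a JSON body."""
    return json.dumps({"images": image_urls, "classes": labels})

body = build_clip_request(
    ["https://example.com/photo.jpg"],
    ["a photo of a cat", "a photo of a dog"],
)
print(body)
```

You would then POST this body to the container's inference endpoint (for example with `curl` or `requests`) and read the per-label scores from the JSON response.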
Stars: 68
Forks: 16
Language: Python
License: —
Category: —
Last pushed: Oct 08, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/svpino/clip-container"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mlfoundations/open_clip
An open source implementation of CLIP.
noxdafox/clipspy
Python CFFI bindings for the 'C' Language Integrated Production System CLIPS
openai/CLIP
CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
moein-shariatnia/OpenAI-CLIP
Simple implementation of OpenAI CLIP model in PyTorch.
BioMedIA-MBZUAI/FetalCLIP
Official repository of FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis