kevinzakka/clip_playground
An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities
This project provides interactive notebooks for exploring how CLIP matches images with text, including concepts it was never explicitly trained to recognize. You supply images and natural-language descriptions, and the notebooks show how well the model recognizes the described objects or ideas in those images. Researchers, data scientists, and AI enthusiasts can use it to quickly test and visualize zero-shot computer vision techniques.
178 stars. No commits in the last 6 months.
Use this if you want to experiment with zero-shot image recognition that identifies objects or themes in pictures from natural-language descriptions, without extensive custom training.
Not ideal if you're looking for a production-ready, highly optimized solution for large-scale image classification or object detection tasks.
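At its core, CLIP's zero-shot recognition is a softmax over cosine similarities between one image embedding and one text embedding per candidate description. The sketch below illustrates that scoring step with toy 3-dimensional vectors standing in for real CLIP embeddings (the actual encoders produce much higher-dimensional vectors, and the prompts and values here are made up for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy stand-ins for CLIP embeddings; in practice these come from
# the image encoder and text encoder, not hand-written lists.
image_embedding = [0.9, 0.1, 0.2]
prompt_embeddings = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a cat": [0.1, 0.9, 0.3],
}

labels = list(prompt_embeddings)
# CLIP scales similarities by a learned temperature (~100) before the softmax.
logits = [100 * cosine(image_embedding, prompt_embeddings[l]) for l in labels]
probs = dict(zip(labels, softmax(logits)))
best = max(probs, key=probs.get)
print(best)  # the prompt whose embedding best matches the image
```

The notebooks in the repository wrap this same idea in interactive form: swap in your own image and prompts, and the highest-probability prompt is the model's zero-shot prediction.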
Stars
178
Forks
13
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Jul 27, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kevinzakka/clip_playground"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mlfoundations/open_clip
An open source implementation of CLIP.
noxdafox/clipspy
Python CFFI bindings for the 'C' Language Integrated Production System CLIPS
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
moein-shariatnia/OpenAI-CLIP
Simple implementation of OpenAI CLIP model in PyTorch.
BioMedIA-MBZUAI/FetalCLIP
Official repository of FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis