open_clip and CLIP

The open_clip project is a community-maintained reimplementation and extension of OpenAI's original CLIP model. The two projects are ecosystem siblings: open_clip is the more actively maintained, production-oriented alternative, while openai/CLIP remains the original research codebase.

                  open_clip           CLIP
Score             73 (Verified)       60 (Established)
Maintenance       13/25               13/25
Adoption          15/25               10/25
Maturity          25/25               16/25
Community         20/25               21/25
Stars             13,496              32,796
Forks             1,253               3,961
Downloads         (not listed)        (not listed)
Commits (30d)     1                   1
Language          Python              Jupyter Notebook
License           (not listed)        MIT

No risk flags. No package or dependents data.

About open_clip

mlfoundations/open_clip

An open source implementation of CLIP.

This project provides pre-trained models that understand both images and text, allowing you to connect what you see with descriptive phrases. You can input an image and a list of text descriptions to get back probabilities of which description best matches the image. This is ideal for researchers or developers building applications that need to categorize images based on natural language or search for images using text.
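The matching step described above reduces to cosine similarity between embeddings, scaled and passed through a softmax. A minimal sketch with toy vectors standing in for the model's image and text encoder outputs (function names here are illustrative, not open_clip's API):

```python
import math

def normalize(v):
    # scale a vector to unit length
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def softmax(scores):
    # convert similarity scores to probabilities
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def match_probs(image_emb, text_embs, scale=100.0):
    """Probability that each text description matches the image.
    Mirrors CLIP-style scoring: cosine similarity, temperature-scaled,
    then softmax over the candidate descriptions."""
    img = normalize(image_emb)
    sims = [scale * sum(a * b for a, b in zip(img, normalize(t)))
            for t in text_embs]
    return softmax(sims)

# Toy embeddings; in real use these come from the encoders.
image_emb = [0.9, 0.1, 0.0]
text_embs = [[1.0, 0.0, 0.0],   # "a photo of a dog"
             [0.0, 1.0, 0.0],   # "a photo of a cat"
             [0.0, 0.0, 1.0]]   # "a diagram"
probs = match_probs(image_emb, text_embs)
print(probs.index(max(probs)))  # → 0 (first description wins)
```

In open_clip itself, the embeddings come from a pretrained model's `encode_image` and `encode_text` methods; the scoring logic is the same.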

image-text-matching zero-shot-classification multimodal-search computer-vision natural-language-processing

About CLIP

openai/CLIP

CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image.

This project helps you understand what an image depicts by matching it with descriptive text. You input an image and a list of possible text descriptions or categories, and it tells you which description is most relevant. This is ideal for anyone working with large collections of images who needs to quickly categorize, search, or understand image content without extensive manual labeling.
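The visual-search use case mentioned above works in the reverse direction: embed one text query, then rank a collection of image embeddings against it. A sketch with toy vectors (names are illustrative; real embeddings come from CLIP's encoders):

```python
import math

def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(text_emb, image_embs, top_k=2):
    """Rank image embeddings against one text query,
    returning (index, similarity) pairs, best match first."""
    scored = [(i, cosine(text_emb, img)) for i, img in enumerate(image_embs)]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_k]

# Toy collection: three "images" and a text query for the second one.
image_embs = [[1.0, 0.0], [0.1, 0.9], [0.7, 0.7]]
query = [0.0, 1.0]
print(search(query, image_embs))  # best match is index 1
```

Because the ranking only needs the precomputed embeddings, large collections are typically embedded once and indexed, so each query is a cheap similarity lookup rather than a fresh model forward pass per image.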

image-categorization visual-search content-moderation digital-asset-management data-labeling-automation

Scores updated daily from GitHub, PyPI, and npm data. How scores work