CLIP and simple-clip

openai/CLIP is the official OpenAI implementation and provides the reference model and weights; simple-clip is a minimal PyTorch reimplementation that attempts to replicate it for educational or resource-constrained use.

                 CLIP               simple-clip
Score            60 (Established)   50 (Established)
Maintenance      13/25              0/25
Adoption         10/25              8/25
Maturity         16/25              25/25
Community        21/25              17/25
Stars            32,796             42
Forks            3,961              8
Downloads        n/a                n/a
Commits (30d)    1                  0
Language         Jupyter Notebook   Jupyter Notebook
License          MIT                MIT
Notes            n/a                No package, no dependents; stale for 6 months

About CLIP

openai/CLIP

CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image

This project helps you understand what an image depicts by matching it with descriptive text. You input an image and a list of possible text descriptions or categories, and it tells you which description is most relevant. This is ideal for anyone working with large collections of images who needs to quickly categorize, search, or understand image content without extensive manual labeling.
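The matching step described above boils down to comparing an image embedding against embeddings of the candidate descriptions. A minimal sketch of that scoring, using random stand-in embeddings in place of CLIP's actual encoder outputs (the labels and dimensions are illustrative assumptions, not values from the repository):

```python
import numpy as np

# Hypothetical zero-shot scoring: in practice these embeddings would come
# from CLIP's image and text encoders; here they are random stand-ins.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)        # one image embedding
text_embs = rng.normal(size=(3, 512))   # embeddings for 3 candidate captions
labels = ["a photo of a dog", "a photo of a cat", "a diagram"]

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Cosine similarity between the image and each candidate caption
sims = normalize(text_embs) @ normalize(image_emb)

# Softmax over similarities yields per-caption relevance probabilities
probs = np.exp(sims) / np.exp(sims).sum()
best = labels[int(np.argmax(probs))]
```

The caption with the highest probability is reported as the most relevant description for the image.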

image-categorization visual-search content-moderation digital-asset-management data-labeling-automation

About simple-clip

filipbasara0/simple-clip

A minimal, but effective implementation of CLIP (Contrastive Language-Image Pretraining) in PyTorch

This project helps machine learning engineers and researchers quickly train powerful models that understand both images and text. You input a large dataset of images paired with their descriptions, and it outputs a trained model capable of linking visual content with natural language. This model can then perform tasks like image classification or advanced visual reasoning without needing specific, task-based training.
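The pretraining objective that links the paired images and descriptions is CLIP's symmetric contrastive loss: matching image-text pairs on the batch diagonal are pushed together, all other pairings apart. A toy sketch with random stand-in embeddings (the batch size, dimension, and temperature are illustrative assumptions):

```python
import numpy as np

# Sketch of CLIP's symmetric contrastive loss on a toy batch, assuming the
# image and text embeddings were already produced by the two encoders.
rng = np.random.default_rng(1)
batch, dim = 4, 32
img = rng.normal(size=(batch, dim))
txt = rng.normal(size=(batch, dim))

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

temperature = 0.07
# Pairwise cosine similarities, scaled by the temperature
logits = normalize(img) @ normalize(txt).T / temperature  # (batch, batch)

def cross_entropy(logits, targets):
    # Numerically stable log-softmax cross-entropy
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

targets = np.arange(batch)  # matching pairs lie on the diagonal
# Average of image-to-text and text-to-image classification losses
loss = (cross_entropy(logits, targets) + cross_entropy(logits.T, targets)) / 2
```

Minimizing this loss over many batches is what produces a model that can classify or retrieve images from natural-language labels without task-specific training.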

computer-vision natural-language-processing zero-shot-learning image-classification model-pretraining

Scores updated daily from GitHub, PyPI, and npm data.