OpenAI-CLIP and simple-clip
About OpenAI-CLIP
moein-shariatnia/OpenAI-CLIP
A simple implementation of the OpenAI CLIP model in PyTorch.
This project helps researchers and engineers build models that understand images and text together. Given a collection of images and their descriptive captions, it trains a model that can connect what is seen in a picture with what is said in a sentence. This is useful for tasks such as searching images with text queries or classifying images from natural-language descriptions.
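At the core of both projects is CLIP's contrastive training objective: image and text embeddings of matched pairs are pulled together, while mismatched pairs in the same batch are pushed apart. A minimal NumPy sketch of that symmetric loss (an illustration of the idea, not either repo's actual code; the function name and `temperature` default are assumptions):

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss for a batch of paired embeddings.

    image_emb, text_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalize so dot products are cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix; matched pairs lie on the diagonal
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]

    def cross_entropy(lg):
        # numerically stable log-softmax over rows; target for row i is column i
        z = lg - lg.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average the image->text and text->image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

In the actual repos the embeddings come from PyTorch image and text encoders, but the loss follows this same structure: a shared similarity matrix with cross-entropy applied along both axes.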
About simple-clip
filipbasara0/simple-clip
A minimal but effective implementation of CLIP (Contrastive Language-Image Pretraining) in PyTorch.
This project helps machine learning engineers and researchers quickly train models that understand both images and text. You provide a large dataset of images paired with their descriptions, and it outputs a trained model that links visual content with natural language. That model can then perform tasks such as zero-shot image classification, with no task-specific fine-tuning.
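The zero-shot classification that a trained CLIP model enables can be sketched as follows: embed a text prompt for each candidate class (e.g. "a photo of a cat"), embed the image, and pick the class with the highest cosine similarity. A simplified NumPy illustration, assuming the embeddings have already been produced by trained encoders (the function name and inputs are hypothetical):

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs, class_names):
    """Return the class whose prompt embedding best matches the image.

    image_emb: (dim,) embedding of one image.
    class_text_embs: (num_classes, dim) embeddings of per-class prompts,
        assumed to come from a trained text encoder.
    """
    # normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb)
    class_text_embs = class_text_embs / np.linalg.norm(
        class_text_embs, axis=1, keepdims=True)
    sims = class_text_embs @ image_emb
    return class_names[int(np.argmax(sims))]
```

Because the class set is only a list of text prompts, new classes can be added at inference time without retraining, which is what "no task-specific training" means in practice.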