open_clip and clip-container

These are complements: open_clip provides the underlying model implementation that clip-container wraps into a deployable REST API service.

                 open_clip        clip-container
Overall score    73 (Verified)    36 (Emerging)
Maintenance      13/25            2/25
Adoption         15/25            8/25
Maturity         25/25            8/25
Community        20/25            18/25
Stars            13,496           68
Forks            1,253            16
Downloads        (not listed)     (not listed)
Commits (30d)    1                0
Language         Python           Python
License          (not listed)     (not listed)
Risk flags       None             No License; Stale 6m; No Package; No Dependents

About open_clip

mlfoundations/open_clip

An open source implementation of CLIP.

This project provides pre-trained models that understand both images and text. Given an image and a list of candidate text descriptions, the model returns the probability that each description matches the image. It is well suited to researchers and developers building applications that categorize images using natural language or search for images with text queries.

image-text-matching zero-shot-classification multimodal-search computer-vision natural-language-processing
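The matching step described above reduces to a softmax over scaled cosine similarities between normalized embeddings. A minimal sketch: the probability helper below is self-contained, and the embedding helper shows how open_clip's documented API would produce its inputs (it requires `pip install open_clip_torch` and downloads model weights; the "ViT-B-32"/"laion2b_s34b_b79k" pairing is one published combination, and the image path and labels are illustrative):

```python
import numpy as np

def match_probabilities(image_features, text_features):
    """Given one image embedding and N text embeddings, return one
    probability per text that it describes the image: a softmax over
    scaled cosine similarities, as CLIP-style models do at inference."""
    img = image_features / np.linalg.norm(image_features)
    txt = text_features / np.linalg.norm(text_features, axis=1, keepdims=True)
    logits = 100.0 * (txt @ img)          # scaled cosine similarities
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

def embed_with_open_clip(image_path, labels):
    """Embed an image and candidate labels with open_clip.
    Requires `pip install open_clip_torch`; downloads weights on
    first use. Model/checkpoint names are one published pairing."""
    import torch
    import open_clip
    from PIL import Image

    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k")
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    image = preprocess(Image.open(image_path)).unsqueeze(0)
    text = tokenizer(labels)
    with torch.no_grad():
        img = model.encode_image(image)[0].numpy()
        txt = model.encode_text(text).numpy()
    return img, txt
```

With a local image, usage would look like `match_probabilities(*embed_with_open_clip("photo.jpg", ["a dog", "a cat", "a diagram"]))`, returning one probability per label.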

About clip-container

svpino/clip-container

A containerized REST API around OpenAI's CLIP model.

This is a tool for developers who need to integrate image classification into their applications. It accepts image URLs and a list of candidate labels, and returns how likely each image is to match each label, letting developers add image understanding to their software without deep machine learning expertise.

developer-tooling image-processing application-development backend-services
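Using the container then reduces to a single HTTP request. A sketch using only the standard library, assuming the container is running locally and exposes a SageMaker-style `/invocations` endpoint accepting JSON; the port, endpoint path, and the `"images"`/`"classes"` field names are assumptions here, so check the project README for the exact request schema:

```python
import json
import urllib.request

def build_request(image_urls, labels,
                  endpoint="http://localhost:8080/invocations"):
    """Build the POST request for the container. The endpoint and the
    "images"/"classes" field names are assumed, not taken from the
    project docs."""
    payload = json.dumps({"images": image_urls,
                          "classes": labels}).encode("utf-8")
    return urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")

def classify(image_urls, labels):
    """Send the request and return the parsed JSON response
    (per-label likelihoods for each image)."""
    with urllib.request.urlopen(build_request(image_urls, labels)) as resp:
        return json.loads(resp.read())
```

Against a running container, `classify(["https://example.com/cat.jpg"], ["a cat", "a dog"])` would return the service's per-label scores for that image.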

Scores updated daily from GitHub, PyPI, and npm data.