open_clip vs onnx_clip

onnx_clip is a specialized, lightweight alternative to the open-source PyTorch implementation in open_clip, designed for environments where PyTorch dependencies are undesirable, rather than a direct competitor for all use cases.

                open_clip        onnx_clip
Score           73 (Verified)    39 (Emerging)
Maintenance     13/25            0/25
Adoption        15/25            9/25
Maturity        25/25            16/25
Community       20/25            14/25
Stars           13,496           76
Forks           1,253            11
Downloads
Commits (30d)   1                0
Language        Python           Python
License                          MIT
Risk flags      None             Archived; Stale 6m; No Package; No Dependents

About open_clip

mlfoundations/open_clip

An open source implementation of CLIP.

This project provides pre-trained models that understand both images and text, allowing you to connect what you see with descriptive phrases. You can input an image and a list of text descriptions to get back probabilities of which description best matches the image. This is ideal for researchers or developers building applications that need to categorize images based on natural language or search for images using text.

image-text-matching zero-shot-classification multimodal-search computer-vision natural-language-processing
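The matching step described above reduces to cosine similarity between the image embedding and each text embedding, followed by a temperature-scaled softmax over the similarities. A minimal, dependency-free sketch of that scoring step (the vectors here are toy values, not real CLIP embeddings, and the scale of 100 stands in for CLIP's learned temperature):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def softmax(scores, scale=100.0):
    # CLIP multiplies similarities by a learned temperature
    # (roughly 100 after training) before the softmax.
    exps = [math.exp(scale * s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy stand-ins for the encoders' outputs (hypothetical values).
image_embedding = [0.7, 0.1, 0.2]
text_embeddings = {
    "a photo of a cat": [0.68, 0.12, 0.19],
    "a photo of a dog": [0.1, 0.8, 0.3],
}

scores = [cosine_similarity(image_embedding, v)
          for v in text_embeddings.values()]
probs = softmax(scores)
for label, p in zip(text_embeddings, probs):
    print(f"{label}: {p:.3f}")
```

In a real pipeline the embeddings come from the library's image and text encoders; the arithmetic after that point is exactly this similarity-then-softmax step, which is why the output can be read as per-description match probabilities.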

About onnx_clip

lakeraai/onnx_clip

An ONNX-based implementation of the CLIP model that doesn't depend on torch or torchvision.

Scores updated daily from GitHub, PyPI, and npm data.