Armaggheddon/ClipServe
🚀 ClipServe: A fast API server for embedding text, images, and performing zero-shot classification using OpenAI’s CLIP model. Powered by FastAPI, Redis, and CUDA for lightning-fast, scalable AI applications. Transform texts and images into embeddings or classify images with custom labels—all through easy-to-use endpoints. 🌐📊
ClipServe helps you quickly understand and categorize large collections of text and images. You provide text descriptions or images, and it processes them to identify similarities or classify images into predefined categories, even without prior training on those specific categories. This tool is for data scientists, machine learning engineers, and researchers who need fast, scalable AI capabilities for multimodal data analysis.
No commits in the last 6 months.
Use this if you need to rapidly process and analyze text and image data, or classify images based on simple text descriptions, without extensive machine learning setup.
Not ideal if you require a simple desktop application for occasional use or are working with highly specialized data that requires custom, domain-specific model fine-tuning.
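To make "classify images with custom labels" concrete: CLIP-style zero-shot classification embeds the image and each candidate label text, then scores the image against every label by cosine similarity and softmaxes the scores into probabilities. The sketch below shows only that scoring step, with toy vectors standing in for real CLIP embeddings; it is not ClipServe's actual code, just the standard technique the description refers to.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def zero_shot_probs(image_emb, label_embs):
    """Score an image embedding against each label-text embedding,
    then softmax the similarities into one probability per label."""
    sims = [cosine(image_emb, t) for t in label_embs]
    m = max(sims)                              # subtract max for stability
    exps = [math.exp(s - m) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 3-d embeddings standing in for real CLIP outputs.
image = [0.9, 0.1, 0.0]
labels = [[1.0, 0.0, 0.0],   # e.g. embedding of "a photo of a cat"
          [0.0, 1.0, 0.0]]   # e.g. embedding of "a photo of a dog"
probs = zero_shot_probs(image, labels)
```

Because no label needs to be seen during training, swapping in a new category is just embedding a new text string, which is why the description says classification works "even without prior training on those specific categories."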
Stars: 8
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Sep 29, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Armaggheddon/ClipServe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
OFA-Sys/Chinese-CLIP
Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
Kaushalya/medclip
A multi-modal CLIP model trained on the medical dataset ROCO
kastalimohammed1965/CLIP-fine-tune-registers-gated
Vision Transformers Need Registers. And Gated MLPs. And +20M params. Tiny modality gap ensues!
BUAADreamer/SPN4CIR
[ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives...
clip-italian/clip-italian
CLIP (Contrastive Language–Image Pre-training) for Italian