gzomer/clip-multilingual
Multilingual CLIP - Semantic Image Search in 100 languages
This project offers a way to search for images using text descriptions in over 100 languages. You input a search query in your preferred language, and it finds relevant images based on the meaning of your words, not just keywords. It's designed for anyone who needs to find specific images across different languages, such as content creators, researchers, or e-commerce managers.
No commits in the last 6 months.
Use this if you need to semantically search for images using text in many different languages, or if you want to classify images based on descriptions without explicit training.
Not ideal if you primarily need to search for images using exact keywords or filenames within a single language, or if you require image classification with highly specialized, domain-specific labels that are not easily described.
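The core mechanism behind such a search is comparing a text embedding against precomputed image embeddings by cosine similarity. Here is a minimal, framework-free sketch of that ranking step; in the real project the vectors would come from a multilingual CLIP model (the embedding step itself is out of scope here and the function names are illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, image_vecs, top_k=3):
    # Rank images by similarity to the query, best first.
    scores = [cosine(query_vec, v) for v in image_vecs]
    return sorted(range(len(image_vecs)), key=lambda i: -scores[i])[:top_k]

# Toy example: the query vector is closest to images 0 and 2.
images = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(search([1.0, 0.0], images))  # [0, 2, 1]
```

Because both text and images live in the same embedding space, the same ranking works regardless of the query language.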
- Stars: 8
- Forks: 3
- Language: Python
- License: —
- Category:
- Last pushed: Mar 01, 2022
- Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/gzomer/clip-multilingual"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
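The same endpoint can be called from Python. A small sketch using only the standard library, assuming the endpoint returns JSON (the response schema is not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository endpoint URL.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("gzomer", "clip-multilingual"))
```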
Higher-rated alternatives
unum-cloud/UForm
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts,...
rom1504/clip-retrieval
Easily compute clip embeddings and build a clip retrieval system with them
mazzzystar/Queryable
Run OpenAI's CLIP and Apple's MobileCLIP model on iOS to search photos.
s-emanuilov/litepali
LitePali is a minimal, efficient implementation of ColPali for image retrieval and indexing,...
slavabarkov/tidy
Offline semantic Text-to-Image and Image-to-Image search on Android powered by quantized...