marqo-ai/marqo-FashionCLIP
State-of-the-art CLIP/SigLIP embedding models fine-tuned for the fashion domain, with a +57% improvement in evaluation metrics over FashionCLIP 2.0.
This project provides CLIP/SigLIP embedding models fine-tuned for fashion product understanding. Given fashion-related text (such as product descriptions or categories) or product images, the models produce embeddings you can use to find relevant fashion items. This is useful for e-commerce teams, fashion merchandisers, and anyone building fashion search or recommendation systems.
124 stars. No commits in the last 6 months.
Use this if you need accurate search and classification of fashion items based on text descriptions or images; a usage sketch follows below.
Not ideal if your primary use case is outside the fashion domain or if you do not need cross-modal (text-to-image) search.
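As a rough sketch of how such models are typically used: assuming the weights are published on the Hugging Face Hub under Marqo/marqo-fashionCLIP and loadable through open_clip (check the repository README for the exact identifiers), text and image embeddings can be compared like this:

# Sketch: cross-modal fashion search with open_clip.
# The hub id "Marqo/marqo-fashionCLIP" and the image path are assumptions;
# see the repository README for the exact loading instructions.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:Marqo/marqo-fashionCLIP"
)
tokenizer = open_clip.get_tokenizer("hf-hub:Marqo/marqo-fashionCLIP")
model.eval()

# One product image and two candidate text descriptions.
image = preprocess(Image.open("product.jpg")).unsqueeze(0)
texts = tokenizer(["a red floral summer dress", "black leather ankle boots"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # Normalize so that dot products become cosine similarities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    similarity = (image_features @ text_features.T).squeeze(0)

print(similarity)  # higher score = better text/image match

The same embeddings can be indexed in a vector database to power text-to-image or image-to-image retrieval over a product catalog.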
Stars: 124
Forks: 14
Language: Python
License: Apache-2.0
Category: Embeddings
Last pushed: Sep 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/marqo-ai/marqo-FashionCLIP"
The API is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
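For programmatic use, the same endpoint can be queried from Python. This is a minimal sketch assuming a JSON response; the response schema and the header name used for an API key are not documented here and are assumptions:

# Sketch: fetching this repository's quality data from the API.
# The JSON structure and the API-key header name are assumptions.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/embeddings/marqo-ai/marqo-FashionCLIP"

response = requests.get(URL, timeout=10)  # no key: 100 requests/day
response.raise_for_status()
data = response.json()
print(data)

# With a free key (hypothetical header name), the limit rises to 1,000 requests/day:
# response = requests.get(URL, headers={"X-API-Key": "<your-key>"}, timeout=10)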
Higher-rated alternatives
unum-cloud/UForm
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts,...
rom1504/clip-retrieval
Easily compute clip embeddings and build a clip retrieval system with them
mazzzystar/Queryable
Run OpenAI's CLIP and Apple's MobileCLIP model on iOS to search photos.
s-emanuilov/litepali
LitePali is a minimal, efficient implementation of ColPali for image retrieval and indexing,...
slavabarkov/tidy
Offline semantic Text-to-Image and Image-to-Image search on Android powered by quantized...