clifs and clip-image-search
Both implement CLIP-based semantic search over visual content, but they target different media: clifs searches video frames while clip-image-search searches static images. Because a video frame is ultimately just an image, both tools solve the same underlying problem, text-driven multimodal retrieval with CLIP, which makes them **competitors** for that use case rather than complements or siblings.
About clifs
johanmodin/clifs
Contrastive Language-Image Forensic Search allows free-text searching through videos using OpenAI's machine learning model CLIP.
This tool helps anyone who needs to find specific moments in video footage without manually watching hours of content. You input your video files and then simply type in a description of what you're looking for, like "a white BMW car" or "a bicyclist with a blue shirt." It then returns video frames that best match your text description. This is ideal for security analysts, media researchers, or anyone reviewing video for specific events or objects.
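To make the idea concrete, here is a minimal sketch of text-to-video-frame search, assuming the Hugging Face `transformers` CLIP implementation and OpenCV for frame extraction. The sampling rate, function names, and the file `footage.mp4` are illustrative assumptions; clifs' actual pipeline differs in its details.

```python
# Sketch: index video frames with CLIP's image encoder, then rank them
# against a free-text query. Not clifs' actual code; an illustration of
# the same technique using the Hugging Face CLIP checkpoint.
import cv2
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def index_video(path, every_n_frames=30):
    """Sample every Nth frame and encode it into a unit-norm CLIP embedding."""
    cap = cv2.VideoCapture(path)
    frame_ids, embeddings = [], []
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n_frames == 0:
            image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            inputs = processor(images=image, return_tensors="pt")
            with torch.no_grad():
                emb = model.get_image_features(**inputs)
            embeddings.append(emb / emb.norm(dim=-1, keepdim=True))
            frame_ids.append(i)
        i += 1
    cap.release()
    return frame_ids, torch.cat(embeddings)

def search(query, frame_ids, embeddings, top_k=5):
    """Rank indexed frames by cosine similarity to the text query."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(**inputs)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    scores = (embeddings @ text_emb.T).squeeze(1)
    best = scores.topk(min(top_k, len(frame_ids)))
    return [(frame_ids[i], scores[i].item()) for i in best.indices]

frame_ids, embs = index_video("footage.mp4")  # hypothetical input file
print(search("a white BMW car", frame_ids, embs))
```

Because both frames and queries are normalized, the dot product is cosine similarity, so the highest-scoring frame IDs point to the moments that best match the description.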
About clip-image-search
kingyiusuen/clip-image-search
Search images with a text or image query, using OpenAI's pretrained CLIP model.
This tool helps you quickly find specific images within a large collection using either a descriptive text phrase or another image as your search query. You provide a collection of images and then search them by describing what you're looking for, or by providing an example image. This is ideal for anyone managing large visual assets, like content creators, marketers, or photo librarians, who need to locate relevant visuals without manually tagging or sifting through folders.
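A minimal sketch of the dual-mode (text or image) query idea follows, again assuming the Hugging Face CLIP checkpoint. The folder `photos/` is a hypothetical example, and the in-memory tensor index stands in for whatever storage backend the real project uses.

```python
# Sketch: embed a folder of images once, then query the index with either
# a text phrase or an example image. An illustration of the technique,
# not the repo's actual API.
from pathlib import Path
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    """Encode image files into unit-norm CLIP embeddings."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_image_features(**inputs)
    return emb / emb.norm(dim=-1, keepdim=True)

def embed_query(query):
    """Embed either a text string or a PIL image as the search query."""
    if isinstance(query, str):
        inputs = processor(text=[query], return_tensors="pt", padding=True)
        with torch.no_grad():
            emb = model.get_text_features(**inputs)
    else:
        inputs = processor(images=query, return_tensors="pt")
        with torch.no_grad():
            emb = model.get_image_features(**inputs)
    return emb / emb.norm(dim=-1, keepdim=True)

paths = sorted(Path("photos/").glob("*.jpg"))  # hypothetical image folder
index = embed_images(paths)

query_emb = embed_query("a red bicycle leaning against a wall")
scores = (index @ query_emb.T).squeeze(1)
for i in scores.topk(min(3, len(paths))).indices:
    print(paths[i], scores[i].item())
```

Passing a `PIL.Image` instead of a string to `embed_query` gives image-to-image search with no other changes, since CLIP maps both modalities into the same embedding space.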