zer0int/CLIP-gradient-ascent-embeddings
Use CLIP to create matching texts + embeddings for given images; useful for XAI, adversarial training
This tool helps researchers and AI practitioners probe how AI models 'interpret' images by generating descriptive text and numerical representations (embeddings) that best match a given image. You provide an image or a folder of images, and it outputs text files containing the model's 'opinion' of each image, alongside the corresponding embeddings. This is useful for work on explainable AI (XAI) and for building AI systems that must be robust against misleading inputs.
No commits in the last 6 months.
Use this if you need to generate text descriptions and numerical embeddings that align closely with the visual content of your images, especially for tasks related to AI explainability or adversarial robustness.
Not ideal if you are looking for human-like image captioning or if you need to use models converted to HuggingFace format, as this tool requires models in the original OpenAI/CLIP format.
Stars
7
Forks
—
Language
Python
License
—
Category
—
Last pushed
Dec 09, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/zer0int/CLIP-gradient-ascent-embeddings"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
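If you prefer Python to curl, the same endpoint can be called with the standard library. This is a minimal sketch assuming the URL pattern shown above generalizes to `{owner}/{repo}`; the `quality_url` helper is hypothetical, not part of any official client.

```python
# Minimal sketch of querying the quality API from Python (stdlib only).
# Assumes the URL pattern shown in the curl example above.
import json
from urllib.request import urlopen

API = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub repo (hypothetical helper)."""
    return f"{API}/embeddings/{owner}/{repo}"

if __name__ == "__main__":
    url = quality_url("zer0int", "CLIP-gradient-ascent-embeddings")
    with urlopen(url) as resp:   # live network call; needs internet access
        print(json.dumps(json.load(resp), indent=2))
```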
Higher-rated alternatives
unum-cloud/UForm
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts,...
rom1504/clip-retrieval
Easily compute clip embeddings and build a clip retrieval system with them
mazzzystar/Queryable
Run OpenAI's CLIP and Apple's MobileCLIP model on iOS to search photos.
s-emanuilov/litepali
LitePali is a minimal, efficient implementation of ColPali for image retrieval and indexing,...
slavabarkov/tidy
Offline semantic Text-to-Image and Image-to-Image search on Android powered by quantized...