zer0int/CLIP-gradient-ascent-embeddings

Use CLIP to create matching texts + embeddings for given images; useful for XAI, adversarial training

Quality score: 12 / 100 (Experimental)

This tool helps researchers and AI practitioners understand how AI models 'interpret' images by generating descriptive text and numerical representations (embeddings) that best match a given image. You provide an image or a folder of images, and it outputs text files with the AI's 'opinion' about each image, alongside corresponding numerical embeddings. This is useful for those working on explainable AI (XAI) or developing AI systems that need to be robust against misleading inputs.
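The matching texts and embeddings are produced by gradient ascent in CLIP's shared embedding space. As a toy illustration of that objective (not the repository's actual code, which optimizes CLIP text tokens against a real image encoder), the following NumPy sketch maximizes cosine similarity between a trainable vector and a fixed stand-in for an image embedding. The dimension (512), learning rate, and step count are illustrative assumptions.

```python
import numpy as np

def normalize(v):
    """Scale a vector to unit length, as CLIP does with its embeddings."""
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
# Random unit vectors standing in for CLIP encoder outputs.
image_emb = normalize(rng.normal(size=512))  # fixed target ("image")
text_emb = normalize(rng.normal(size=512))   # trainable ("text")

before = float(text_emb @ image_emb)
lr = 0.1
for _ in range(200):
    # Gradient of cosine similarity w.r.t. a unit vector: the component of
    # the target orthogonal to the current vector.
    grad = image_emb - (text_emb @ image_emb) * text_emb
    text_emb = normalize(text_emb + lr * grad)
after = float(text_emb @ image_emb)

print(f"cosine similarity before: {before:.3f}, after: {after:.3f}")
```

In the real tool, the same ascent runs through CLIP's text encoder, so the optimized variable stays interpretable as tokens rather than a free-floating vector.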

No commits in the last 6 months.

Use this if you need to generate text descriptions and numerical embeddings that align closely with the visual content of your images, especially for tasks related to AI explainability or adversarial robustness.

Not ideal if you are looking for human-like image captioning or if you need to use models converted to HuggingFace format, as this tool requires models in the original OpenAI/CLIP format.

Explainable AI · Adversarial Machine Learning · Computer Vision Research · Image Understanding · AI Safety
No License · Stale 6m · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 7
Forks:
Language: Python
License: none
Last pushed: Dec 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/zer0int/CLIP-gradient-ascent-embeddings"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.