sMamooler/CLIP_Explainability

code for studying OpenAI's CLIP explainability

Overall score: 27 / 100 (Experimental)

This project helps researchers and developers understand how AI models like CLIP interpret images and text. It takes an image and a set of target descriptions (other images, text, or emotions) and produces visual maps that highlight which parts of the input image are most relevant to each target. Anyone working with vision-language AI models, especially those needing to explain model decisions or biases, would find this useful.
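The repository's notebooks implement their own relevance-map technique, but the general workflow can be sketched independently. The example below is a rough illustration only, not code taken from this project: it computes a simple gradient-based saliency map for one image-text pair using the openai CLIP package, with the model name, image path, and prompt as placeholder assumptions.

# Illustrative sketch only: a basic gradient-based saliency map for CLIP.
# This is NOT the method used in sMamooler/CLIP_Explainability; the model
# name, image path, and prompt below are placeholder assumptions.
import torch
import clip                      # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model.eval()

image = preprocess(Image.open("input.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a happy person"]).to(device)
image.requires_grad_(True)

# The score being explained is the image-text similarity.
image_features = model.encode_image(image)
text_features = model.encode_text(text)
score = torch.nn.functional.cosine_similarity(image_features, text_features).sum()
score.backward()

# Pixel-level relevance: gradient magnitude of the score w.r.t. the input image,
# reduced over the colour channels. Larger values mark more relevant pixels.
saliency = image.grad.abs().max(dim=1)[0].squeeze().cpu()

A heatmap like saliency can then be overlaid on the original image; the repository's notebooks produce comparable visualisations for image, text, and emotion targets.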

No commits in the last 6 months.

Use this if you need to visually inspect and explain why a CLIP model perceives similarities or dissimilarities between images and text, or even emotions.

Not ideal if you are looking for a tool to train new vision-language models or perform image recognition directly, rather than analyze existing model behavior.

AI explainability · computer vision · research · natural language processing · model interpretation · AI ethics
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 12 / 25

Stars: 38
Forks: 5
Language: Jupyter Notebook
License: None
Last pushed: Jan 07, 2022
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sMamooler/CLIP_Explainability"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
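For scripted use, the same endpoint can be queried from Python; the sketch below assumes only that the endpoint returns a JSON body, since the response fields are not documented here.

# Minimal sketch of calling the same endpoint from Python.
# Assumes the response body is JSON; field names are not shown because
# they are not documented on this page.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sMamooler/CLIP_Explainability"
response = requests.get(url, timeout=30)   # no API key needed within the free daily limit
response.raise_for_status()
print(response.json())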