sMamooler/CLIP_Explainability
code for studying OpenAI's CLIP explainability
This project helps researchers and developers understand how AI models like CLIP interpret images and text. It takes an image and a set of target descriptions (other images, text, or emotions) and produces visual maps that highlight which parts of the input image are most relevant to each target. Anyone working with vision-language AI models, especially those needing to explain model decisions or biases, would find this useful.
No commits in the last 6 months.
Use this if you need to visually inspect and explain why a CLIP model perceives similarity or dissimilarity between an image and a given target, whether that target is text, another image, or an emotion label.
Not ideal if you need to train new vision-language models or run image recognition directly, rather than analyze an existing model's behavior.
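The core idea described above, scoring parts of an image against a target description and rendering the scores as a heatmap, can be sketched in a few lines. This is a minimal illustration, not the repository's actual code: the function name, shapes, and the use of per-patch cosine similarity are assumptions; CLIP-style models expose patch-level embeddings that can be compared against a text embedding in the shared space.

```python
import numpy as np

def relevance_heatmap(patch_embeds: np.ndarray, target_embed: np.ndarray,
                      grid_size: int) -> np.ndarray:
    """Score each image-patch embedding against a target embedding.

    patch_embeds: (n_patches, dim) image patch features (hypothetical shapes)
    target_embed: (dim,) embedding of the target text/image/emotion
    Returns a (grid_size, grid_size) relevance map normalized to [0, 1].
    """
    # Cosine similarity between each patch and the target
    patches = patch_embeds / np.linalg.norm(patch_embeds, axis=1, keepdims=True)
    target = target_embed / np.linalg.norm(target_embed)
    scores = patches @ target                      # (n_patches,)
    # Rescale to [0, 1] so the map can be overlaid on the input image
    lo, hi = scores.min(), scores.max()
    scores = (scores - lo) / (hi - lo + 1e-8)
    return scores.reshape(grid_size, grid_size)

# Toy demo with random features standing in for real CLIP embeddings
rng = np.random.default_rng(0)
heat = relevance_heatmap(rng.normal(size=(49, 512)), rng.normal(size=512), 7)
print(heat.shape)
```

In practice the patch embeddings would come from a CLIP vision encoder (e.g. a 7×7 grid of ViT patch tokens projected into the joint space), and the resulting map would be upsampled and blended over the original image. The repository's notebooks implement more sophisticated attribution than plain cosine similarity.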
Stars: 38
Forks: 5
Language: Jupyter Notebook
License: —
Last pushed: Jan 07, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sMamooler/CLIP_Explainability"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...