SunzeY/AlphaCLIP
[CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
This tool helps creative professionals and researchers direct AI models to focus on specific parts of an image. By providing an image along with a mask highlighting an area of interest, the AI will prioritize that region when generating descriptions or creating new images. This is ideal for designers, marketers, or researchers working with visual content who need precise control over AI interpretations.
869 stars. No commits in the last 6 months.
Use this if you need an AI to interpret or generate images with a specific focus on a particular object or region within the visual content.
Not ideal if you need a general image interpretation without any specific area of focus, or if you don't have clear masks to define regions of interest.
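The mask-based workflow described above can be sketched as follows. This is a minimal illustration of building a binary alpha mask for a rectangular region of interest, not the Alpha-CLIP API itself; the helper name and box format are assumptions for the example.

```python
import numpy as np

def make_alpha_mask(height, width, box):
    """Binary alpha mask: 1.0 inside the region of interest, 0.0 elsewhere.
    `box` = (top, left, bottom, right) in pixel coordinates.
    (Hypothetical helper, not part of Alpha-CLIP itself.)"""
    mask = np.zeros((height, width), dtype=np.float32)
    top, left, bottom, right = box
    mask[top:bottom, left:right] = 1.0
    return mask

# Highlight a 100x100 patch inside a 224x224 image.
mask = make_alpha_mask(224, 224, (50, 50, 150, 150))
print(int(mask.sum()))  # → 10000
```

A mask like this (resized to the model's input resolution) is the kind of region annotation the tool expects alongside the image.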
Stars: 869
Forks: 58
Language: Jupyter Notebook
License: Apache-2.0
Category: ml-frameworks
Last pushed: Jul 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SunzeY/AlphaCLIP"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000/day.
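For programmatic access, the endpoint above can be assembled from its parts like this (a small sketch; the helper is hypothetical, and the path layout is taken from the curl command above).

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the quality-data endpoint URL; path segments are percent-encoded.
    # (Hypothetical convenience helper, not part of any published client.)
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("ml-frameworks", "SunzeY", "AlphaCLIP"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SunzeY/AlphaCLIP
```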
Higher-rated alternatives
mlfoundations/open_clip: An open-source implementation of CLIP.
noxdafox/clipspy: Python CFFI bindings for the 'C' Language Integrated Production System (CLIPS).
openai/CLIP: CLIP (Contrastive Language-Image Pretraining); predicts the most relevant text snippet for a given image.
moein-shariatnia/OpenAI-CLIP: A simple implementation of the OpenAI CLIP model in PyTorch.
BioMedIA-MBZUAI/FetalCLIP: Official repository of FetalCLIP, a visual-language foundation model for fetal ultrasound image analysis.