hila-chefer/TargetCLIP
[ECCV 2022] Official PyTorch implementation of the paper Image-Based CLIP-Guided Essence Transfer.
This project transfers the 'essence', the distinctive style, of a target image onto source images. You provide a target image (such as a celebrity face or a cartoon character) that embodies a specific style, and the tool applies that style to the source images you select, generating new, stylized outputs. It is aimed at graphic designers and artists who want to manipulate image styles without deep knowledge of image-editing software.
231 stars. No commits in the last 6 months.
Use this if you want to quickly apply the unique visual style or 'essence' of one image to a collection of other images, especially for portraits or character art.
Not ideal if you need fine-grained control over individual image features rather than a global style transfer.
Stars
231
Forks
27
Language
Jupyter Notebook
License
—
Category
diffusion
Last pushed
Oct 02, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/hila-chefer/TargetCLIP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model