HFAiLab/clip-gen
CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP
This project helps you create images from text descriptions, even if you don't have existing image-text pairs for training. You provide a collection of images and, after training, you can input a text phrase to generate a new image that matches your description. This is useful for content creators, designers, or anyone needing to visualize concepts without extensive labeled datasets.
146 stars. No commits in the last 6 months.
Use this if you need to generate images from text descriptions and have a dataset of images, but lack the corresponding text labels for those images.
Not ideal if you already have large datasets of perfectly matched image-text pairs, as other methods might be more straightforward.
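The trick that makes this language-free is CLIP's shared image-text embedding space: the generator is trained to reconstruct images from their own CLIP *image* embeddings (no captions needed), and at inference a CLIP *text* embedding is fed in instead. Below is a minimal runnable sketch of that idea using toy stand-ins; the random-projection "encoders" and the linear least-squares "generator" are illustrative assumptions only (the real project uses CLIP with a VQGAN and a transformer), so the toy encoders here are not actually aligned across modalities.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, IMG_DIM = 4, 16  # tiny dims for illustration; CLIP uses 512+

# Frozen stand-in for CLIP's image encoder: a fixed random projection.
W_img = rng.normal(size=(EMBED_DIM, IMG_DIM))

def clip_image_embed(image: np.ndarray) -> np.ndarray:
    e = W_img @ image
    return e / np.linalg.norm(e)

def clip_text_embed(text: str) -> np.ndarray:
    # Toy deterministic "text encoder" (hash-seeded), standing in for
    # CLIP's text tower, which shares the embedding space with images.
    trng = np.random.default_rng(abs(hash(text)) % 2**32)
    e = trng.normal(size=EMBED_DIM)
    return e / np.linalg.norm(e)

# Training phase: fit a "generator" G that maps an image's own CLIP
# embedding back to the image. Only images are needed -- no captions.
images = rng.normal(size=(32, IMG_DIM))
embeds = np.stack([clip_image_embed(x) for x in images])
G, *_ = np.linalg.lstsq(embeds, images, rcond=None)  # linear generator

# Inference phase: swap in a text embedding. Because real CLIP aligns
# the two modalities, the generator then produces a matching image.
img_out = clip_text_embed("a red apple") @ G
```

The design point is that the generator never sees text during training; the modality swap happens only at inference, which is why an unlabeled image collection suffices.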
Stars: 146
Forks: 16
Language: Python
License: MIT
Category: diffusion
Last pushed: Jun 10, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/HFAiLab/clip-gen"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
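For scripted access, the same endpoint can be called from Python with the standard library. The URL pattern below is taken from the curl command above; the response's JSON field names are not documented here, so the helper just returns the parsed payload for inspection (an assumption, not a documented schema).

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL following the pattern in the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record (no key needed up to 100 requests/day)."""
    with urllib.request.urlopen(
        quality_url(category, owner, repo), timeout=10
    ) as resp:
        return json.load(resp)

url = quality_url("diffusion", "HFAiLab", "clip-gen")
```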
Higher-rated alternatives
NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model