nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
This project helps artists, designers, and creative enthusiasts generate unique images from text descriptions or existing images. You input a text prompt describing what you want to see (e.g., "A painting of an apple in a fruit bowl") and optionally an image to influence the style, and the system outputs a new, original image matching your description. It's for anyone looking to quickly visualize concepts or create digital art without needing traditional drawing skills.
2,653 stars. No commits in the last 6 months.
Use this if you want to create original digital art or conceptual images by simply describing them in text, or if you want to apply a specific artistic style to an existing image.
Not ideal if you need fine-grained control over every pixel, precise photo editing, or if you don't have access to a powerful computer with a dedicated graphics card.
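As a rough sketch of the workflow described above: the repository drives generation from the command line via its `generate.py` script, with the text prompt passed as a flag. The `-p` flag is taken from the project's README; other flags vary by version, so check `python generate.py -h` in your checkout before relying on them.

```shell
# Usage sketch, assuming the repo is cloned and its conda/pip environment is set up
# (requires a CUDA-capable GPU for reasonable speed).
python generate.py -p "A painting of an apple in a fruit bowl"
```

The output image is written into the working directory; run time and memory use depend heavily on the resolution and iteration count you choose.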
Stars: 2,653
Forks: 426
Language: Python
License: —
Category: diffusion
Last pushed: Oct 02, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/nerdyrodent/VQGAN-CLIP"
Open to everyone: 100 requests per day with no key needed. Get a free key for 1,000 per day.
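The same endpoint can be called from code. A minimal Python sketch, using only the standard library: the URL pattern is taken from the curl example above, but the response fields and the auth header name for keyed requests are assumptions, not documented here.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for a repository (pattern from the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, api_key=None) -> dict:
    """GET the quality record as a dict.

    The Authorization header name for keyed access is an assumption;
    consult the API docs for the real scheme.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# URL for the repository on this page:
print(quality_url("diffusion", "nerdyrodent", "VQGAN-CLIP"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/nerdyrodent/VQGAN-CLIP
```

Building the URL separately from the request makes the snippet easy to test offline and to adapt to other repositories in the same category.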
Higher-rated alternatives
NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model
eps696/aphantasia
CLIP + FFT/DWT/RGB = text to image/video