ouhenio/StyleGAN3-CLIP-notebooks
A collection of Jupyter notebooks for playing with NVIDIA's StyleGAN3 and OpenAI's CLIP for text-guided image generation.
This helps artists, designers, and creatives generate unique images from text descriptions. You provide words or phrases describing what you want to see, and it produces a corresponding image. This tool is ideal for anyone looking to visualize abstract concepts or rapidly prototype visual ideas without needing to draw or sculpt.
215 stars. No commits in the last 6 months.
Use this if you want to explore creative visual ideas by simply typing what you envision, like "a futuristic city at sunset" or "a surreal cat playing chess."
Not ideal if you need precise control over every pixel or want to modify an existing photograph with exact changes, as the output is generative and interpretative.
Stars: 215
Forks: 19
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 31, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ouhenio/StyleGAN3-CLIP-notebooks"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
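The same request can be issued from Python. Below is a minimal sketch using only the standard library; note that the `Authorization: Bearer` header for keyed access is an assumption, since the listing does not say how an API key is passed:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner, repo):
    # Build the per-repo endpoint shown in the curl example above.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    # Anonymous access allows 100 requests/day; a free key raises that to 1,000.
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Header name is an assumption; check the API's docs for the real scheme.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_quality("ouhenio", "StyleGAN3-CLIP-notebooks")` fetches the same JSON the curl command returns.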
Higher-rated alternatives
NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model