VQGAN-CLIP and CLIP-Guided-Diffusion
These projects are ecosystem siblings: both are local implementations of different generative backbones (VQGAN and a diffusion model) that share the same CLIP guidance mechanism, in which CLIP scores how well the current image matches the text prompt and that score steers each generation step.
About VQGAN-CLIP
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
This project helps artists, designers, and creative enthusiasts generate unique images from text descriptions or existing images. You input a text prompt describing what you want to see (e.g., "A painting of an apple in a fruit bowl") and optionally an image to influence the style, and the system outputs a new, original image matching your description. It's for anyone looking to quickly visualize concepts or create digital art without needing traditional drawing skills.
About CLIP-Guided-Diffusion
nerdyrodent/CLIP-Guided-Diffusion
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.
This tool helps artists, designers, and creative individuals generate unique images from text descriptions. You input a text prompt, and the system produces a corresponding image. It's for anyone who wants to quickly visualize concepts or create novel artwork using artificial intelligence.