VQGAN-CLIP and CLIP-Guided-Diffusion

These projects are ecosystem siblings: both are local implementations of different generative approaches (VQGAN and diffusion) that share the same CLIP guidance mechanism for steering text-to-image generation.

                 VQGAN-CLIP            CLIP-Guided-Diffusion
Overall score    49 (Emerging)         44 (Emerging)
Maintenance      0/25                  0/25
Adoption         10/25                 10/25
Maturity         16/25                 16/25
Community        23/25                 18/25
Stars            2,653                 385
Forks            426                   48
Downloads        -                     -
Commits (30d)    0                     0
Language         Python                Python
License          -                     -
Status flags     Stale 6m, No Package, No Dependents (both projects)

About VQGAN-CLIP

nerdyrodent/VQGAN-CLIP

Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.

This project helps artists, designers, and creative enthusiasts generate unique images from text descriptions or existing images. You input a text prompt describing what you want to see (e.g., "A painting of an apple in a fruit bowl") and optionally an image to influence the style, and the system outputs a new, original image matching your description. It's for anyone looking to quickly visualize concepts or create digital art without needing traditional drawing skills.

Tags: digital-art, concept-generation, graphic-design, creative-imaging, visual-prototyping
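The text-prompt workflow described above maps onto the repository's command-line interface. A minimal sketch, assuming the setup and `generate.py -p` usage shown in the project's README (a working PyTorch environment and downloaded model checkpoints are prerequisites; run `python generate.py -h` for the full flag list):

```shell
# Fetch the project locally (the point of the repo is avoiding Colab).
git clone https://github.com/nerdyrodent/VQGAN-CLIP.git
cd VQGAN-CLIP

# Generate an image from a text prompt; -p passes the prompt string.
# Output is written as an image file in the working directory.
python generate.py -p "A painting of an apple in a fruit bowl"
```

The optional style-image input mentioned above is also exposed via command-line flags; consult the README for the exact flag names, as they are repo-specific.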

About CLIP-Guided-Diffusion

nerdyrodent/CLIP-Guided-Diffusion

Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.

This tool helps artists, designers, and creative individuals generate unique images from text descriptions. You input a phrase or a series of words, and the system produces a corresponding image. It's for anyone who wants to quickly visualize concepts or create novel artwork using artificial intelligence.

Tags: digital-art, concept-generation, creative-design, text-to-image, AI-art

Scores updated daily from GitHub, PyPI, and npm data.