rkhamilton/vqgan-clip-generator
Implements VQGAN+CLIP for image and video generation and style transfer from text and image prompts. Emphasis on ease of use, documentation, and smooth video creation.
This tool helps artists, designers, and creatives generate unique images and videos from text descriptions or existing images. You input a text prompt (like "A pastoral landscape painting by Rembrandt") or an image, and it outputs a brand new image or a stylized video. It's designed for anyone looking to quickly visualize concepts or transform videos with AI-driven artistic styles.
112 stars. No commits in the last 6 months.
Use this if you want to generate artwork, design concepts, or styled videos directly from descriptive text or by transforming existing visual media.
Not ideal if you need ultra-high-resolution images without additional upscaling steps or if you require extremely fast generation without access to a powerful GPU.
Stars: 112
Forks: 26
Language: Jupyter Notebook
License: —
Category: diffusion
Last pushed: Feb 11, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/rkhamilton/vqgan-clip-generator"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
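The endpoint path in the curl example appears to follow the pattern /api/v1/quality/<category>/<owner>/<repo>. A small helper for building the URL for other repositories might look like the sketch below; note that the pattern is inferred from the single example above, not from published API documentation, so treat it as an assumption.

```python
# Build the stats-API URL for a given repository.
# ASSUMPTION: the /api/v1/quality/<category>/<owner>/<repo> path pattern is
# inferred from the one curl example on this page, not from official docs.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def build_api_url(category: str, owner: str, repo: str) -> str:
    """Return the quality-stats endpoint URL for one repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

url = build_api_url("diffusion", "rkhamilton", "vqgan-clip-generator")
print(url)
```

The URL can then be fetched with any HTTP client (curl, requests, urllib); within the free tier, no API key header is needed.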
Related models
tnwei/vqgan-clip-app
Local image generation using VQGAN-CLIP or CLIP guided diffusion
QuenithAI/Video-Generation-Paper-List
Tracking the latest and greatest research papers on video generation.
torrinworx/Cozy-Auto-Texture
A Blender add-on for generating free textures using the Stable Diffusion AI text to image model.
sbmagar13/VQGAN-CLIP-Text-to-Image
Text-to-Image Synthesis using Multimodal (VQGAN + CLIP) Architectures
Jaso1024/Refining-Generated-Videos
IEEE 2023 | REGIS: Refining Generated Videos via Iterative Stylistic Remodeling