ChenWu98/generative-visual-prompt
[NeurIPS 2022] (Amortized) distributional control for pre-trained generative models
This project helps graphic designers and artists create specific images using existing generative AI models like StyleGAN2 or diffusion models. You provide a text prompt or an example image, and the system generates new images that match your descriptions or attributes. It's ideal for anyone looking to control the output of AI image generators without needing to train new models from scratch.
121 stars. No commits in the last 6 months.
Use this if you want to steer a pre-trained image generator with text descriptions, image attributes, or controls such as a subject's pose, producing the visual content you want without retraining.
Not ideal if you need to generate images from scratch without any existing generative model or if your primary goal is to train a new generative model from your own datasets.
Stars: 121
Forks: 6
Language: Python
License: —
Category: —
Last pushed: Sep 04, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ChenWu98/generative-visual-prompt"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
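For scripted access, the curl command above can be reproduced in Python. This is a minimal sketch: the endpoint path is taken from the curl example, but the JSON response schema is not documented here, so the script simply pretty-prints whatever comes back. The `quality_url` helper and the `"diffusion"` category segment are inferred from the example URL, not from an API reference.

```python
"""Sketch of calling the quality endpoint shown in the curl example.

Assumptions: the URL layout is /api/v1/quality/<category>/<owner>/<repo>,
and anonymous access is limited to 100 requests/day.
"""
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repo (category inferred from the example)."""
    return f"{BASE}/{category}/{owner}/{repo}"


if __name__ == "__main__":
    url = quality_url("diffusion", "ChenWu98", "generative-visual-prompt")
    # Anonymous request; pretty-print the undocumented JSON payload as-is.
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(json.dumps(json.load(resp), indent=2))
```

If you have an API key, consult the service's docs for how to supply it; the authentication header name is not stated on this page.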
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...