cloneofsimo/paint-with-words-sd
Implementation of Paint-with-words with Stable Diffusion: the method from eDiff-I that lets you generate images from a text-labeled segmentation map.
This project helps graphic designers, concept artists, and visual content creators precisely control AI image generation. You input a basic colored sketch (segmentation map) where each color is linked to a descriptive word, along with a full text prompt. The output is a detailed image that accurately places objects and scenes based on your sketch, enabling fine-tuned composition.
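To make "each color is linked to a descriptive word" concrete, here is a minimal sketch of how such a text-labeled segmentation map can be represented. The dict layout, the "word,weight" convention, and the `parse_context` helper are illustrative assumptions, not the repo's actual API:

```python
# Hypothetical sketch: each segmentation-map RGB color is tied to a prompt
# word plus an attention weight. The "word,weight" string convention and
# all names here are assumptions for illustration, not the repo's real API.
color_context = {
    (7, 9, 182): "aurora,0.5",        # dark-blue region labeled "aurora"
    (136, 178, 92): "full moon,1.5",  # green region labeled "full moon"
    (51, 193, 217): "mountains,0.4",
}

input_prompt = (
    "A digital painting of a half-frozen lake near mountains "
    "under a full moon and aurora."
)

def parse_context(ctx):
    """Split each 'word,weight' value into a (word, weight) pair."""
    parsed = {}
    for rgb, value in ctx.items():
        word, _, weight = value.rpartition(",")  # weight follows last comma
        parsed[rgb] = (word, float(weight))
    return parsed

parsed = parse_context(color_context)
```

A higher weight for a region would bias the cross-attention toward that word there, which is the core idea of paint-with-words.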
645 stars. No commits in the last 6 months.
Use this if you need to generate images from text descriptions but require precise control over object placement and composition, beyond what a simple text prompt can provide.
Not ideal if you're looking for completely random or unguided image generation, or if you prefer to edit existing images without providing a segmentation map.
Stars
645
Forks
49
Language
Jupyter Notebook
License
MIT
Category
Diffusion
Last pushed
Mar 24, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/cloneofsimo/paint-with-words-sd"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
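The same endpoint can be called from Python with the standard library. The URL scheme mirrors the curl command above; the response schema and the API-key header name are assumptions to verify against the API's own docs:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL following the pattern in the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, api_key: str = None):
    """Fetch the quality record as parsed JSON (schema not documented here)."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        # Header name is an assumption; a free key raises the daily limit.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

url = quality_url("diffusion", "cloneofsimo", "paint-with-words-sd")
```

Without a key you get 100 requests/day, so cache responses locally if you poll many repositories.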
Higher-rated alternatives
neggles/animatediff-cli
a CLI utility/library for AnimateDiff stable diffusion generation
sakalond/StableGen
Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
victordibia/peacasso
UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2
Kandinsky 2 — multilingual text2image latent diffusion model
carefree0910/carefree-drawboard
🎨 Infinite Drawboard in Python