cloneofsimo/paint-with-words-sd

Implementation of Paint-with-Words for Stable Diffusion: a method from eDiff-I that lets you generate images from a text-labeled segmentation map.

Quality score: 42 / 100 (Emerging)

This project helps graphic designers, concept artists, and visual content creators precisely control AI image generation. You input a basic colored sketch (segmentation map) where each color is linked to a descriptive word, along with a full text prompt. The output is a detailed image that accurately places objects and scenes based on your sketch, enabling fine-tuned composition.
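The core input is the pairing of segmentation-map colors with prompt words. The sketch below illustrates how that pairing can be represented in Python; the names (`color_context`, `seg_map`, `words_per_pixel`) and the `"word,weight"` string format are illustrative assumptions based on the project's examples, not a definitive rendering of its API.

```python
# Hypothetical sketch of paint-with-words inputs (names are
# illustrative, not the library's actual API): each color in the
# segmentation map is bound to a word from the prompt, optionally
# with an attention weight appended after a comma.

# Color -> "word,weight" mapping (assumed format).
color_context = {
    (7, 9, 182): "sky,1.0",        # deep blue pixels mean "sky"
    (136, 178, 92): "forest,1.2",  # green pixels mean "forest"
    (51, 193, 217): "lake,1.5",    # cyan pixels mean "lake"
}

# A tiny 3x3 "segmentation map" as a grid of RGB tuples; in practice
# this would be a full-resolution image loaded with PIL.
seg_map = [
    [(7, 9, 182), (7, 9, 182), (7, 9, 182)],
    [(136, 178, 92), (51, 193, 217), (136, 178, 92)],
    [(136, 178, 92), (51, 193, 217), (51, 193, 217)],
]

def words_per_pixel(seg_map, color_context):
    """Resolve each pixel color to its (word, weight) pair."""
    out = []
    for row in seg_map:
        out_row = []
        for color in row:
            word, weight = color_context[color].rsplit(",", 1)
            out_row.append((word, float(weight)))
        out.append(out_row)
    return out

grid = words_per_pixel(seg_map, color_context)
print(grid[0][0])  # ('sky', 1.0)
print(grid[1][1])  # ('lake', 1.5)
```

During generation, this per-region word binding is what steers cross-attention so each labeled area of the sketch is filled with the matching concept from the prompt.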

645 stars. No commits in the last 6 months.

Use this if you need to generate images from text descriptions but require precise control over object placement and composition beyond what a simple text prompt can provide.

Not ideal if you're looking for completely random or unguided image generation, or if you prefer to edit existing images without providing a segmentation map.

digital-art concept-design visual-content-creation illustration image-composition
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 645
Forks: 49
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 24, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/cloneofsimo/paint-with-words-sd"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.