cilabuniba/i-dream-my-painting
[WACV 2025] I Dream My Painting: Connecting MLLMs and Diffusion Models via Prompt Generation for Text-Guided Multi-Mask Inpainting
This tool helps digital artists, designers, and content creators precisely edit images by replacing specific areas with new content based on text descriptions. You input an image and text prompts describing what you want to generate in selected masked regions, and the tool intelligently fills those areas to create a cohesive new image. It's designed for creative professionals who want fine-grained control over image manipulation using AI.
Use this if you need complex image inpainting where multiple distinct objects or regions in an image are replaced, each guided by its own text instruction.
Not ideal if you want a simple, one-click image editor that doesn't require familiarity with AI model setup or dataset preparation.
Stars: 17
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Dec 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/cilabuniba/i-dream-my-painting"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
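If you'd rather call the endpoint from code than from curl, the request above can be sketched in Python. Note the path pattern `/api/v1/quality/<category>/<owner>/<repo>` is an assumption generalized from the single example URL shown here, and the response schema is undocumented, so the sketch just parses whatever JSON comes back:

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository.

    The category/owner/repo path layout is inferred from the one
    example URL above, not from official API documentation.
    """
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the repo's quality data and decode the JSON body."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Matches the curl example: no API key needed up to 100 requests/day.
    data = fetch_quality("diffusion", "cilabuniba", "i-dream-my-painting")
    print(json.dumps(data, indent=2))
```

With a free key (1,000 requests/day), you would presumably attach it to the request; how the key is passed (header vs. query parameter) is not stated on this page, so that part is left out of the sketch.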
Higher-rated alternatives
jolibrain/joliGEN
Generative AI Image and Video Toolset with GANs and Diffusion for Real-World Applications
zhangmozhe/Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
naver-ai/StyleKeeper
Official Pytorch implementation of "StyleKeeper: Prevent Content Leakage using Negative Visual...
un1tz3r0/finetunepixelartdiffusion
Fine tune a pixelart diffusion model with isometric dataset.
lixiaowen-xw/DiffuEraser
DiffuEraser is a diffusion model for video inpainting, which performs great content completeness...