sungnyun/diffblender
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models
This project helps graphic designers, digital artists, and content creators generate images from complex descriptions. You provide a text prompt along with additional conditioning inputs, and it synthesizes a custom image that combines these elements. It's aimed at anyone who needs to create detailed visual content from specific, multi-faceted concepts.
No commits in the last 6 months.
Use this if you need to create unique images by blending multiple descriptive elements, like text descriptions, styles, or other visual conditions.
Not ideal if you're looking for a simple text-to-image generator without the need for advanced conditional control or multimodal inputs.
Stars: 46
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Dec 21, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sungnyun/diffblender"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
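The endpoint above can be called from any HTTP client. Below is a minimal Python sketch that builds the request URL for an arbitrary repository; the JSON field names in the sample response are assumptions inferred from the stats shown on this page, not a documented schema.

```python
import json
from urllib.parse import quote

# Base endpoint taken from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL, escaping each path segment."""
    return f"{BASE_URL}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("sungnyun", "diffblender")

# Hypothetical response shape, based on the fields listed on this page;
# the real API's schema may differ.
sample_response = json.loads(
    '{"stars": 46, "forks": null, "language": "Python",'
    ' "license": "Apache-2.0", "commits_30d": 0}'
)
```

To make a live request, pass `url` to any HTTP client (e.g. `urllib.request.urlopen` or `curl`), keeping in mind the 100 requests/day unauthenticated limit noted above.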
Higher-rated alternatives
UCSC-VLAA/story-iter - [ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX - Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla - a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion - Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story - 🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...