showlab/BoxDiff
[ICCV 2023] BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion
This project helps graphic designers, content creators, and marketers generate images from text descriptions with precise control over object placement. You input a text prompt describing the scene and specify bounding boxes for key elements. The output is a high-quality image where objects appear exactly where you want them, overcoming a common limitation of general text-to-image tools.
275 stars. No commits in the last 6 months.
Use this if you need to generate images from text and require control over the location and size of individual objects in the result.
Not ideal if you only need general text-to-image generation without fine-grained spatial control over elements.
Stars: 275
Forks: 18
Language: Python
License: —
Category: —
Last pushed: Nov 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/showlab/BoxDiff"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
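The same endpoint can be called from a script instead of curl. Below is a minimal Python sketch: the URL pattern (category/owner/repo) is taken from the curl example above, but the `fetch_quality` helper name and any JSON field names are assumptions, since the response schema is not documented here.

```python
# Sketch of querying the pt-edge quality API for a repository.
# The URL pattern follows the curl example above; the response
# schema is an assumption, so the result is returned as a raw dict.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repo, e.g. diffusion/showlab/BoxDiff."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the JSON stats for a repo (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)


print(quality_url("diffusion", "showlab", "BoxDiff"))
```

Calling `fetch_quality("diffusion", "showlab", "BoxDiff")` performs the same request as the curl line above and counts against the same daily limit.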
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...