hohonu-vicml/DirectedDiffusion
Directed Diffusion: Direct Control of Object Placement through Attention Guidance (AAAI 2024)
This tool helps creators, marketers, or storytellers accurately place multiple objects or characters within AI-generated images. Instead of hoping a text prompt generates your desired scene, you can specify exactly where elements like people, products, or props should appear. This gives you precise visual control, making it easier to create consistent and narrative-driven imagery for projects like storyboards or marketing campaigns.
No commits in the last 6 months.
Use this if you need to generate images from text descriptions but find current AI models struggle with placing specific objects in precise locations or maintaining spatial relationships between elements.
Not ideal if your image generation needs are for single objects or abstract scenes where precise spatial control of multiple elements is not critical.
Stars: 81
Forks: 5
Language: Python
License: —
Category:
Last pushed: Feb 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/hohonu-vicml/DirectedDiffusion"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...