YangLing0818/IterComp
[ICLR 2025] IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation
IterComp helps digital artists, designers, and marketers generate high-quality images from text descriptions, especially for complex scenes. You provide a text prompt describing the image you want, and it outputs an image that accurately reflects the described objects, their attributes, and their spatial relationships. It is well suited to anyone who needs specific visual content quickly, without manual graphic design.
204 stars. No commits in the last 6 months.
Use this if you need to generate images from complex text prompts and require precise control over how different elements and attributes are rendered in relation to each other.
Not ideal if you primarily need simple image generation or prefer to heavily fine-tune existing images rather than creating new ones from scratch.
Stars: 204
Forks: 11
Language: Python
License: MIT
Category: diffusion
Last pushed: Feb 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/YangLing0818/IterComp"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
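The same endpoint can be called from a script. A minimal Python sketch, using only the standard library and assuming the URL pattern shown in the curl example above; the `Authorization: Bearer` header for keyed access is a hypothetical, since the card doesn't document how a key is passed:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (pattern from the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, api_key=None) -> dict:
    """Fetch quality data for a repo and parse the JSON response.

    api_key is optional; the Bearer-token header below is an assumption,
    not documented on this card -- check the API docs before relying on it.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed auth scheme
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the URL for this repo's entry; call fetch_quality() to hit the API.
    print(quality_url("diffusion", "YangLing0818", "IterComp"))
```

The response schema isn't shown on this page, so the sketch returns the parsed JSON as-is rather than assuming particular fields.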
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...