bytedance/UNO
[ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning
This tool helps creatives and designers generate new images that feature specific subjects in diverse scenarios while maintaining high consistency. You provide one or more images of subjects (like a product, a character, or a logo) and a text description of the desired scene. The output is a brand-new image where your subjects are seamlessly integrated into that scene, looking natural and consistent. It's ideal for content creators, marketers, or artists needing to visualize objects or characters in various contexts.
1,353 stars. No commits in the last 6 months.
Use this if you need to generate high-quality images where specific objects or characters you provide are consistently placed into new, custom-described environments.
Not ideal if you primarily need to generate images from scratch without conditioning on existing subject images or if you require fine-grained control over lighting and camera angles for pre-existing photographic assets.
Stars
1,353
Forks
77
Language
Python
License
Apache-2.0
Category
diffusion
Last pushed
Sep 12, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/bytedance/UNO"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
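The curl command above can also be scripted. The sketch below is a minimal Python example, assuming the endpoint path follows the pattern `/api/v1/quality/<category>/<owner>/<repo>` shown in the URL and returns JSON; how an API key is passed for the 1,000/day tier is not documented here, so the example uses the keyless tier only.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository (assumed pattern)."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record on the keyless tier (100 requests/day).
    The JSON schema is not specified here, so the result is returned as-is."""
    with urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)


print(quality_url("diffusion", "bytedance", "UNO"))
```

Calling `fetch_quality("diffusion", "bytedance", "UNO")` then returns whatever JSON the service provides for this repository.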
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥 ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...