huanngzh/Parts2Whole
[TIP 2025] From Parts to Whole: A Unified Reference Framework for Controllable Human Image Generation
This project helps designers, marketers, and artists create unique human portraits by combining specific body parts and styles from various reference images. You provide separate images for things like a face, a particular outfit, or a pose, along with a text description, and it generates a new, integrated human image. This is ideal for anyone needing to generate customized human images for campaigns, virtual try-ons, or creative content without complex photo editing.
196 stars. No commits in the last 6 months.
Use this if you need to generate high-quality, customized human images by assembling different visual elements like faces, clothing, and poses from multiple sources, controlled by text prompts.
Not ideal if you need to generate images with highly specific artistic styles or if your primary subjects are not human, as the current model's generalization across diverse styles and non-human subjects is limited.
Stars: 196
Forks: 9
Language: Python
License: MIT
Category:
Last pushed: Sep 21, 2025
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/huanngzh/Parts2Whole"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
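For scripted access, the same endpoint can be called from Python. This is a minimal sketch using only the standard library; the response is assumed to be JSON, and its field names are not documented here, so the payload is printed raw rather than parsed into specific fields.

```python
# Hypothetical sketch: fetch the quality data for a repository from the
# pt-edge API. Only the URL pattern is taken from the card above; the
# response schema is an assumption.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL, e.g. for diffusion/huanngzh/Parts2Whole."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint (free tier, no API key) and decode the JSON body."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example usage (performs a network request):
# data = fetch_quality("diffusion", "huanngzh", "Parts2Whole")
# print(data)
```

Authenticated access for the higher rate limit would presumably add a key header or query parameter, but the exact mechanism is not shown on this page.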
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...