xie-lab-ml/Weak-to-Strong-Diffusion-with-Reflection
[ICLR2026] The official code of "Weak-to-Strong Diffusion with Reflection".
This project helps image-generation artists and designers improve the quality and fidelity of images produced by diffusion models. It takes an existing 'weak' and 'strong' model pair as input (like DreamShaper vs. SD1.5, or a specific LoRA vs. a standard model) and produces higher-quality images that align better with human preferences, personalized styles, or specific control conditions, mitigating common failures such as inaccurate object placement, color, or counting.
Use this if you are a digital artist or designer who wants more visually appealing and accurate images from your existing diffusion models by exploiting the gap between different model pairings.
Not ideal if you want to train a completely new image-generation model from scratch; this tool refines the output of existing models rather than creating new ones.
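The weak-to-strong pairing described above can be illustrated as extrapolating between the two models' denoising predictions. This is a minimal sketch of the general principle only, not the repository's actual algorithm; the function name, `guidance_scale`, and the toy arrays are all hypothetical.

```python
import numpy as np

def weak_to_strong_step(weak_pred, strong_pred, guidance_scale=1.5):
    # Extrapolate from the weak model's prediction toward the strong
    # model's: amplify the direction in which the strong model improves
    # on the weak one. A hypothetical simplification of the
    # weak-to-strong reflection idea, not the paper's exact update rule.
    return strong_pred + guidance_scale * (strong_pred - weak_pred)

# Toy 1-D "predictions" standing in for denoiser outputs:
weak = np.array([0.2, 0.4])
strong = np.array([0.3, 0.5])
print(weak_to_strong_step(weak, strong, guidance_scale=1.0))
```

With `guidance_scale=1.0` this doubles the weak-to-strong improvement direction; larger scales push further in that direction, which is the intuition behind pairing a weak and a strong model.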
Stars
56
Forks
—
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Jan 28, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/xie-lab-ml/Weak-to-Strong-Diffusion-with-Reflection"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
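The curl call above can also be made from Python. Only the endpoint URL comes from this listing; the shape of the JSON response (field names, types) is not documented here, so `fetch_quality` is a hedged sketch rather than a client for a known schema.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def build_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the JSON payload (requires network access).

    The keys of the returned dict are an assumption; inspect the raw
    response before relying on any particular field.
    """
    with urllib.request.urlopen(build_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(build_url("xie-lab-ml", "Weak-to-Strong-Diffusion-with-Reflection"))
```

`build_url` reproduces exactly the URL shown in the curl example; swap in another `owner`/`repo` pair to query a different project, subject to the rate limits above.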
Higher-rated alternatives
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.