xie-lab-ml/Weak-to-Strong-Diffusion-with-Reflection

[ICLR2026] The official code of "Weak-to-Strong Diffusion with Reflection".

Score: 34 / 100 (Emerging)

This project helps image generation artists and designers enhance the quality and fidelity of images produced by diffusion models. It takes existing 'weak' and 'strong' image generation models (like DreamShaper vs. SD1.5 or a specific LoRA vs. a standard model) as input. It then generates higher-quality images that align better with human preferences, personalized styles, or specific control conditions, solving common issues like inaccurate object placement, color, or counting.

Use this if you are a digital artist or designer looking to produce more visually appealing and accurate images from your existing diffusion models by leveraging the strengths and weaknesses of different model pairings.

Not ideal if you are looking for a completely new image generation model from scratch, as this tool refines output from existing models rather than creating new ones.
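To make the "leveraging a weak/strong model pairing" idea concrete, here is a schematic sketch of extrapolating from a weak model's prediction toward a strong model's prediction at one denoising step. This illustrates the general weak-to-strong guidance intuition only; it is not the repository's actual algorithm, and `w2s_prediction`, `weak_pred`, `strong_pred`, and `guidance_scale` are all hypothetical names invented for this example.

```python
# Hypothetical illustration of weak-to-strong extrapolation on toy "latents".
# NOT the repository's algorithm: names and the update rule are assumptions.

def w2s_prediction(weak_pred, strong_pred, guidance_scale=1.0):
    """Extrapolate from the weak model's prediction toward the strong one.

    Each element moves past the strong prediction by guidance_scale times
    the (strong - weak) gap, amplifying what the strong model does better.
    """
    return [s + guidance_scale * (s - w) for w, s in zip(weak_pred, strong_pred)]

# Toy example with scalar latents: the gap between models is +0.1 per element,
# so each strong value is pushed a further 0.1 in the same direction.
weak = [0.2, 0.4]
strong = [0.3, 0.5]
refined = w2s_prediction(weak, strong, guidance_scale=1.0)
```

Here `refined` lands at roughly `[0.4, 0.6]`: each component overshoots the strong prediction by the strong-minus-weak difference.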

Tags: AI-art, digital-design, image-generation, creative-workflows, diffusion-models
No package · No dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 56
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/xie-lab-ml/Weak-to-Strong-Diffusion-with-Reflection"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
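The same request can be made from Python with only the standard library. A minimal sketch follows; note that the shape of the JSON response is not documented above, so any field names you read from the payload are assumptions to verify against a real response first.

```python
# Minimal sketch: fetch the quality data for a repository from the public API.
# The URL pattern matches the curl example above; the response schema is an
# assumption, so inspect the decoded payload before relying on field names.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode its JSON body (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("diffusion", "xie-lab-ml", "Weak-to-Strong-Diffusion-with-Reflection")
# data = fetch_quality("diffusion", "xie-lab-ml", "Weak-to-Strong-Diffusion-with-Reflection")
```

The fetch call is left commented out so the snippet does not hit the network when pasted; uncomment it to retrieve live data.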