xie-lab-ml/Zigzag-Diffusion-Sampling

[ICLR2025] The code of Z-Sampling, proposed in our paper "Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection".

Quality score: 29 / 100 (Experimental)

This project provides an advanced sampling method for generating images from text descriptions with Stable Diffusion XL. Given a text prompt, it produces higher-quality, more prompt-faithful images than standard sampling, especially for complex or challenging prompts. Image creators, digital artists, and marketing professionals can use it to get better visual outputs from AI.

No commits in the last 6 months.

Use this if you are using Stable Diffusion XL for text-to-image generation and need to improve the quality, detail, and prompt alignment of your generated images.

Not ideal if you are not working with Stable Diffusion models or primarily need to generate images from scratch without text prompts.
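The idea behind Z-Sampling, per the paper title, is to alternate denoising and inversion steps so that guidance accumulates across each "zigzag". The toy 1-D sketch below illustrates that control flow only; it is not the repository's implementation, and the constants (`TARGET`, `LR`, the guidance scales) are illustrative stand-ins, not values from the paper.

```python
TARGET = 3.0   # stands in for the "prompt direction" a guided denoiser pulls toward
LR = 0.1       # toy step size

def denoise(x, guidance):
    """One toy denoising step: move x toward TARGET, scaled by guidance."""
    return x + LR * guidance * (TARGET - x)

def invert(x, guidance):
    """Approximate inversion: undo a denoising step taken with `guidance`."""
    return x - LR * guidance * (TARGET - x)

def zigzag_sample(x0, steps=10, g_high=7.5, g_low=1.0):
    """Denoise with strong guidance, invert with weak guidance, denoise again.
    The guidance gap (g_high - g_low) accumulates over the zigzags."""
    x = x0
    for _ in range(steps):
        x = denoise(x, g_high)
        x = invert(x, g_low)
        x = denoise(x, g_high)
    return x

def plain_sample(x0, steps=10, g=7.5):
    """Baseline: the same number of outer steps without the zigzag."""
    x = x0
    for _ in range(steps):
        x = denoise(x, g)
    return x
```

In this toy setting the zigzag trajectory ends closer to the target than the plain one for the same number of outer steps, which is the intuition the method trades extra compute for.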

Tags: AI-art-generation · digital-imaging · creative-asset-creation · visual-content-production · text-to-image
Status: Stale (6 months) · No package · No dependents
- Maintenance: 0 / 25
- Adoption: 9 / 25
- Maturity: 16 / 25
- Community: 4 / 25


- Stars: 99
- Forks: 2
- Language: Python
- License: Apache-2.0
- Last pushed: Feb 19, 2025
- Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/xie-lab-ml/Zigzag-Diffusion-Sampling"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
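The same endpoint can be called from Python. The sketch below only builds the URL from the path segments shown in the `curl` example and fetches it with the standard library; the shape of the JSON response is not documented here, so the field names are left unspecified.

```python
import urllib.request
import json

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, category: str = "diffusion") -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("xie-lab-ml", "Zigzag-Diffusion-Sampling")
# Uncomment to fetch (requires network); the response is assumed to be JSON:
# data = json.load(urllib.request.urlopen(url))
```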