hu-zijing/AsynDM

[ICLR 26] Asynchronous diffusion models assign each pixel its own timestep schedule, yielding improved text-to-image alignment.
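To illustrate the idea (this is a toy sketch, not the paper's implementation): a standard diffusion step uses one scalar timestep for the whole image, while an asynchronous schedule gives each pixel its own timestep, so the noise-schedule coefficients are indexed per pixel. The array shapes and schedule below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 8, 8, 1000  # toy image size and number of diffusion steps

# Synchronous: every pixel shares one timestep.
t_sync = np.full((H, W), 500)

# Asynchronous (assumed form): a per-pixel timestep map, so different
# regions of the image can sit at different points of the schedule.
t_async = rng.integers(0, T, size=(H, W))

# A noise schedule is then indexed per pixel instead of per image.
alpha_bar = np.linspace(1.0, 1e-4, T)   # toy monotone schedule
per_pixel_alpha = alpha_bar[t_async]    # shape (H, W)
```

The only change from the synchronous case is that the schedule lookup broadcasts over a timestep map rather than a single scalar.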

Score: 28 / 100 (Experimental)

This project helps graphic designers, marketers, and content creators generate images from text descriptions with greater accuracy. You provide a text prompt describing an image, and it outputs images that align more faithfully with your original intent, especially for detailed or specific elements. It's for anyone who uses text-to-image AI to create visual content and needs higher precision.

No commits in the last 6 months.

Use this if you find that AI-generated images often miss key details or misinterpret parts of your text prompts, especially for complex scenes or specific object placements.

Not ideal if you are looking for a basic text-to-image generator and do not prioritize precise text-to-image alignment.

AI-art-generation digital-content-creation marketing-visuals graphic-design text-to-image
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 6 / 25
Maturity 15 / 25
Community 5 / 25


Stars: 18
Forks: 1
Language: Python
License: MIT
Last pushed: Oct 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/hu-zijing/AsynDM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
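The same endpoint can be queried from a script. A minimal Python sketch using only the standard library; the URL pattern comes from the curl command above, but the JSON fields in the response are not documented here, so the decoded dict is left uninspected:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """GET the quality record and decode the JSON body.

    Anonymous access is rate-limited to 100 requests/day; a free key
    raises this to 1,000/day (how the key is passed is not shown here).
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("diffusion", "hu-zijing", "AsynDM")
```

Calling `fetch_quality("diffusion", "hu-zijing", "AsynDM")` performs the same request as the curl command.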