xlite-dev/Awesome-DiT-Inference

📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc.🎉

Quality score: 51 / 100 (Established)

This project is a curated list of research papers and associated code focused on making Diffusion Transformer (DiT) models run more efficiently. It points AI researchers and practitioners to techniques such as optimized sampling, caching, quantization, and parallelism. The goal is to help users quickly find methods that speed up the generation of images and other media from DiT models, reducing the time and computational resources needed.

526 stars. Actively maintained with 1 commit in the last 30 days.

Use this if you are an AI researcher or machine learning engineer working with Diffusion Transformer models and want to improve their inference speed and resource efficiency.

Not ideal if you are a general user looking for ready-to-use image generation applications, as this resource focuses on technical research for optimizing diffusion models.

Tags: AI-research, machine-learning-engineering, generative-AI, diffusion-models, computational-efficiency
No package. No dependents.
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25

How are scores calculated? The four category scores above (each out of 25) sum to the overall score: 13 + 10 + 16 + 12 = 51 / 100.
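As a quick sanity check, the category values listed above do add up to the headline score; a minimal sketch, using only the numbers shown on this page:

```python
# Each quality category is scored out of 25; the overall score
# (out of 100) appears to be their simple sum.
categories = {"Maintenance": 13, "Adoption": 10, "Maturity": 16, "Community": 12}
overall = sum(categories.values())
print(overall)  # 51, matching the 51 / 100 shown above
```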

Stars: 526
Forks: 26
Language: Python
License: GPL-3.0
Last pushed: Feb 25, 2026
Commits (30d): 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/xlite-dev/Awesome-DiT-Inference"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
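The same data can be fetched from Python using only the standard library. This is a minimal sketch: the URL layout (and the meaning of the "diffusion" path segment) is inferred from the single curl example above, and the JSON response schema is not documented on this page, so no field names are assumed.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report as parsed JSON.

    Assumes the endpoint returns JSON; the schema is not documented here.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example usage (makes a live request, counted against the 100/day limit):
# report = fetch_quality("diffusion", "xlite-dev", "Awesome-DiT-Inference")
# print(json.dumps(report, indent=2))
```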