horseee/learning-to-cache

[NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching

Score: 23 / 100 (Experimental)

This project helps machine learning researchers accelerate image generation with Diffusion Transformers. By learning which layers can be cached and reused across denoising steps, it lets existing Diffusion Transformer models produce images of comparable quality much faster and with significantly less computation. It is aimed at researchers and practitioners working with generative AI models for image synthesis.
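The core pattern is simple: during iterative denoising, a layer whose output changes little between adjacent steps can return its cached result instead of recomputing. The sketch below illustrates only that caching pattern; it is not the repository's code, CachedBlock is a hypothetical name, and in the paper the per-layer, per-step skip schedule is learned offline rather than passed in as a flag.

import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    """Wrap a transformer block and reuse its cached output when skipped."""

    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block
        self.cache = None  # output from the last step this block actually ran

    def forward(self, x: torch.Tensor, skip: bool) -> torch.Tensor:
        if skip and self.cache is not None:
            return self.cache    # reuse: no attention/MLP compute this step
        out = self.block(x)      # recompute and refresh the cache
        self.cache = out
        return out

Because the learned schedule is static at inference time, a skipped layer costs nothing beyond reading a cached tensor.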

118 stars. No commits in the last 6 months.

Use this if you generate images with Diffusion Transformer models and need to cut inference time and compute substantially with little to no loss in image quality.

Not ideal if you are not working with Diffusion Transformers or if your primary bottleneck is model training rather than inference.

generative-AI image-synthesis deep-learning-optimization model-acceleration
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 5 / 25

How are scores calculated? The overall score is the sum of the four sub-scores, each out of 25: 0 + 10 + 8 + 5 = 23 / 100.

Stars: 118
Forks: 3
Language: Python
License: None
Last pushed: Jul 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/horseee/learning-to-cache"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
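The same endpoint can be called from Python. A minimal sketch using requests; the JSON schema is not documented here, so the example only fetches and prints the raw payload.

import requests

# Endpoint from the curl example above; the response structure is an
# assumption, so we print the raw JSON rather than picking fields.
url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/horseee/learning-to-cache"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())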