horseee/learning-to-cache
[NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching
This project helps machine learning researchers accelerate image generation with Diffusion Transformers. It learns which transformer layers change little between adjacent denoising steps and caches their outputs, so existing Diffusion Transformer models generate images of comparable quality noticeably faster and at lower computational cost. It is aimed at researchers and practitioners working on generative image synthesis.
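A minimal sketch of the caching idea, assuming residual transformer blocks (y = x + f(x)) whose update f(x) drifts slowly across adjacent denoising steps. The CachingBlock wrapper, the reuse flag, and the schedule below are illustrative stand-ins for the learned per-layer, per-step decisions described in the paper, not the project's actual API:

import torch
import torch.nn as nn

class CachingBlock(nn.Module):
    # Hypothetical wrapper, not the authors' code: caches the residual
    # update f(x) of a transformer block so it can be reused at the
    # next denoising step instead of being recomputed.
    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block          # the wrapped block computing f(x)
        self.cached_update = None   # f(x) saved at the last computed step

    def forward(self, x: torch.Tensor, reuse: bool) -> torch.Tensor:
        if reuse and self.cached_update is not None:
            return x + self.cached_update   # skip the block, reuse cached f(x)
        update = self.block(x)
        self.cached_update = update         # refresh the cache
        return x + update

# Per-step schedule: True means "reuse the cache" for that layer at that step.
# In the paper this schedule is learned; these values are purely illustrative.
schedule = [[False, False], [True, False], [True, True]]  # 3 steps, 2 layers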
118 stars. No commits in the last 6 months.
Use this if you are generating images with Diffusion Transformer models and need to substantially cut the time and compute required for image synthesis with little to no loss in quality.
Not ideal if you are not working with Diffusion Transformers or if your primary bottleneck is model training rather than inference.
Stars: 118
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Jul 15, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/horseee/learning-to-cache"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
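To fetch the same data from a script, here is a small Python sketch; the JSON response shape is not documented here, so the code simply pretty-prints whatever the endpoint returns:

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/horseee/learning-to-cache"
with urllib.request.urlopen(url) as resp:   # no API key: limited to 100 requests/day
    data = json.load(resp)
print(json.dumps(data, indent=2))           # inspect the returned fields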
Higher-rated alternatives
FlorianFuerrutter/genQC
Generative Quantum Circuits
horseee/DeepCache
[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
Gen-Verse/MMaDA
MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion,...
kuleshov-group/mdlm
[NeurIPS 2024] Simple and Effective Masked Diffusion Language Model
Shark-NLP/DiffuSeq
[ICLR'23] DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models