AniAggarwal/ecad
[ICLR 2026] Code for Evolutionary Caching to Accelerate Your Off-the-Shelf Diffusion Model
This project helps AI researchers and practitioners accelerate image generation with existing diffusion models. It takes an off-the-shelf diffusion model and uses an evolutionary algorithm to discover caching schedules that speed up inference. The output is a set of optimized caching patterns that make your diffusion model run faster without retraining, which is ideal for anyone working with generative AI for visual content.
Use this if you are a researcher or practitioner using diffusion models (like PixArt-α or FLUX) and want to generate high-quality images much faster without changing the model's architecture or retraining it.
Not ideal if you are looking to build a diffusion model from scratch, modify its core architecture, or if you don't use diffusion models for image generation.
Stars: 31
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Mar 01, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/AniAggarwal/ecad"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
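The curl example above can also be wrapped in a few lines of Python. This is a minimal sketch, not an official client: it assumes the URL path follows a `/quality/{category}/{owner}/{repo}` layout (inferred from the single example above, where `diffusion` appears to be a category segment) and that the endpoint returns JSON. Only the URL shown in this listing is confirmed.

```python
import json
import urllib.request

# Base path inferred from the example URL in this listing (an assumption).
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository.

    The path layout is inferred from the one example curl command;
    the `category` segment (e.g. "diffusion") is an assumption.
    """
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON response (response schema unknown)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Reproduces the URL from the curl example above:
# quality_url("diffusion", "AniAggarwal", "ecad")
```

Note the anonymous tier is limited to 100 requests/day, so any batch use of `fetch_quality` should cache responses or supply an API key.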
Higher-rated alternatives
FlorianFuerrutter/genQC
Generative Quantum Circuits
horseee/DeepCache
[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
Gen-Verse/MMaDA
MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion,...
kuleshov-group/mdlm
[NeurIPS 2024] Simple and Effective Masked Diffusion Language Model
Shark-NLP/DiffuSeq
[ICLR'23] DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models