chengzeyi/ParaAttention

Context-parallel attention that accelerates DiT model inference with dynamic caching.
Project site: https://wavespeed.ai/

Score: 44 / 100 (Emerging)

This project helps AI practitioners generate images and videos much faster from text or image prompts using large diffusion models such as FLUX and HunyuanVideo. By parallelizing calculations across GPUs and reusing past computations through dynamic caching, it delivers the same high-quality visual outputs significantly quicker. It is designed for researchers, artists, and developers working with large-scale generative AI models.

425 stars. No commits in the last 6 months.

Use this if you need to speed up the generation of high-quality images or videos using large diffusion transformer (DiT) models, especially across multiple GPUs or with dynamic caching.

Not ideal if you are working with smaller models that don't benefit from parallelism, or if you require exact pixel-level fidelity and cannot accept the small approximation that caching trades away for speed.

generative-ai image-generation video-generation diffusion-models ai-inference
Flags: Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 425
Forks: 45
Language: Python
License: (not listed)
Last pushed: Jul 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/chengzeyi/ParaAttention"

Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
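The same endpoint can be called from Python with only the standard library. This is a minimal sketch: the function names (`quality_url`, `fetch_quality`) are our own, not part of any SDK, and the response schema is not documented on this card, so the parsed JSON is returned as-is.

```python
import json
import urllib.request

# Base path of the quality API, taken from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a GitHub-style owner/repo pair."""
    return f"{BASE_URL}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """GET the quality report and parse the JSON body (schema unknown)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `fetch_quality("chengzeyi", "ParaAttention")` hits the same URL as the curl command above; note the unauthenticated 100 requests/day limit when polling.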