thu-nics/ViDiT-Q

[ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation

Quality score: 36 / 100 (Emerging)

This project helps AI practitioners generate high-quality images and videos with advanced models like OpenSORA and Pixart-Sigma at significantly reduced computational cost. It applies quantization to a trained image/video generation model, producing a more efficient version that uses less memory and runs faster on GPUs without sacrificing visual quality. It is aimed at AI researchers, content creators, and developers who deploy large generative AI models.
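To make the core idea concrete, here is a minimal, illustrative sketch of weight quantization: mapping floating-point weights to 8-bit integers plus a scale factor, which is what shrinks memory use. This is a generic per-tensor scheme for illustration only, not ViDiT-Q's actual (more sophisticated) method.

```python
# Illustrative per-tensor symmetric int8 quantization (not ViDiT-Q's scheme).
def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero tensors
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

Each weight now needs 1 byte instead of 4 (for float32), at the cost of a small rounding error that quantization methods like ViDiT-Q work to keep visually negligible.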

153 stars. No commits in the last 6 months.

Use this if you need to run large image or video generation models more efficiently on hardware with limited memory or processing power, while maintaining high visual fidelity.

Not ideal if you are developing new foundational generative AI models from scratch, as this tool focuses on optimizing existing ones.

Tags: AI model deployment · generative AI · video generation · image generation · AI model optimization
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 18 / 25


Stars: 153
Forks: 25
Language: Python
License: None
Last pushed: Mar 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/thu-nics/ViDiT-Q"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
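The curl call above can also be made from a script. Below is a hedged sketch in Python using only the standard library; the `quality_url` helper and the response schema are assumptions, since the API's JSON format is not documented here.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-report endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, timeout=10):
    """Fetch and decode the JSON quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)

url = quality_url("diffusion", "thu-nics", "ViDiT-Q")
```

Calling `fetch_quality("diffusion", "thu-nics", "ViDiT-Q")` would hit the same endpoint as the curl example, subject to the 100 requests/day anonymous limit.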