thu-nics/ViDiT-Q
[ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
This project helps AI practitioners generate high-quality images and videos with diffusion transformer models such as Open-Sora and PixArt-Sigma at significantly reduced computational cost. It takes a trained image/video generation model and produces a quantized version that uses less memory and runs faster on GPUs, without sacrificing visual quality. This is useful for AI researchers, content creators, and developers who deploy large generative models.
153 stars. No commits in the last 6 months.
Use this if you need to run large image or video generation models more efficiently on hardware with limited memory or processing power, while maintaining high visual fidelity.
Not ideal if you are developing new foundational generative AI models from scratch, as this tool focuses on optimizing existing ones.
Stars: 153
Forks: 25
Language: Python
License: —
Category: —
Last pushed: Mar 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/thu-nics/ViDiT-Q"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
cswry/SeeSR
[CVPR2024] SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution
JJLibra/SALAD-Pan
🤗 Official implementation for "SALAD-Pan: Sensor-Agnostic Latent Adaptive Diffusion for...
open-mmlab/mmgeneration
MMGeneration is a powerful toolkit for generative models, based on PyTorch and MMCV.
Janspiry/Image-Super-Resolution-via-Iterative-Refinement
Unofficial implementation of Image Super-Resolution via Iterative Refinement by Pytorch
hanjq17/Spectrum
[CVPR 2026] Adaptive Spectral Feature Forecasting for Diffusion Sampling Acceleration