thu-nics/MixDQ

[ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization

Score: 26 / 100 (Experimental)

This project helps anyone working with text-to-image AI models such as Stable Diffusion generate images more efficiently. You provide a text prompt, and the model quickly produces an image using significantly less memory and compute. This is ideal for researchers, AI artists, or developers who need to run these models on less powerful hardware or speed up their creative workflows.
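The memory savings come from quantization: storing weights and activations at low bit-widths (the paper mixes bit-widths per layer, guided by a metric-decoupled sensitivity analysis). As a generic illustration only, not MixDQ's actual algorithm, here is a minimal sketch of symmetric per-tensor int8 quantization, which is the basic building block such methods refine:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~ scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(weights)

# int8 storage is 4x smaller than float32
print(weights.nbytes // q.nbytes)  # 4
# round-trip error is bounded by half a quantization step
print(np.abs(dequantize(q, scale) - weights).max() <= scale)  # True
```

Mixed-precision methods like MixDQ go further by choosing a different bit-width per layer (e.g. keeping sensitive layers at higher precision), rather than applying one uniform scheme as above.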

No commits in the last 6 months.

Use this if you want to generate high-quality images from text prompts using diffusion models, but need to reduce the memory footprint and increase the speed of the generation process, especially on consumer-grade GPUs.

Not ideal if you are looking for a completely new text-to-image model or if you require absolute peak visual fidelity without any concern for computational efficiency.

AI-art-generation text-to-image-synthesis machine-learning-optimization diffusion-models generative-AI
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 10 / 25


Stars: 49
Forks: 5
Language: Python
License: none
Last pushed: Nov 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/thu-nics/MixDQ"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
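The same endpoint can be called from Python with only the standard library. The URL layout below is taken from the curl example above; the `X-Api-Key` header name and the JSON response body are assumptions, so check the API's own documentation before relying on them:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    """Build the score-API URL (path layout taken from the curl example)."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem, owner, repo, api_key=None):
    """Fetch the quality record; assumes a JSON response body."""
    req = urllib.request.Request(quality_url(ecosystem, owner, repo))
    if api_key:
        # Hypothetical header name -- the real one is not documented here.
        req.add_header("X-Api-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(quality_url("diffusion", "thu-nics", "MixDQ"))
# https://pt-edge.onrender.com/api/v1/quality/diffusion/thu-nics/MixDQ
```

Without a key this stays within the 100 requests/day anonymous limit; pass `api_key` once you have registered for the 1,000/day tier.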