zelaki/eqvae
[ICML'25] EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling.
This project helps machine learning researchers and practitioners train higher-quality generative image models faster. It takes an existing autoencoder, such as the Stable Diffusion VAE (SD-VAE), and adds an equivariance regularization: spatial transformations like scaling and rotation applied in the latent space are encouraged to produce the same result as applying them to the image itself. This yields a smoother, better-organized latent space and, in turn, generative models that train more efficiently and produce better images.
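To make the regularization idea concrete, here is a minimal sketch of an equivariance penalty. This is an illustration of the general principle only, not the repository's actual implementation: `encode`, `decode`, and the choice of transform are hypothetical stand-ins, and the real EQ-VAE code and loss may differ in detail.

```python
import numpy as np

def hflip(a):
    """Example spatial transform: flip the last (width) axis."""
    return a[..., ::-1]

def eq_reg_loss(encode, decode, x, transform):
    """Sketch of an equivariance penalty (hypothetical API).

    The idea: applying a transform to the latent code and then decoding
    should match applying the same transform directly to the image.
    """
    z_t = transform(encode(x))   # transform in latent space, then decode
    x_t = transform(x)           # transform in pixel space
    return float(np.mean((decode(z_t) - x_t) ** 2))

# With a perfectly equivariant encoder/decoder pair (here, the identity),
# the penalty is zero; any violation of equivariance increases it.
x = np.arange(12.0).reshape(1, 3, 4)
loss = eq_reg_loss(lambda a: a, lambda a: a, x, hflip)
```

In practice this penalty is added to the usual VAE reconstruction and KL terms, nudging the latent space toward respecting the chosen transformations.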
Use this if you are a researcher or developer working with generative image models and want to improve their performance and training efficiency, especially concerning image rotations and scaling.
Not ideal if you are a general user looking for an out-of-the-box image generation tool without diving into model training or optimization.
Stars
174
Forks
7
Language
Python
License
—
Category
Diffusion
Last pushed
Mar 18, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/zelaki/eqvae"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jxhe/vae-lagging-encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational...
chaitanya100100/VAE-for-Image-Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its...
taldatech/soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE:...
lavinal712/AutoencoderKL
Train Your VAE: A VAE Training and Finetuning Script for SD/FLUX
Rayhane-mamah/Efficient-VDVAE
Official Pytorch and JAX implementation of "Efficient-VDVAE: Less is more"