jxhe/vae-lagging-encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders" (ICLR 2019)
This project helps researchers and practitioners working with Variational Autoencoders (VAEs) build more effective generative models. It takes text or image datasets and trains a VAE that is less prone to posterior collapse, a common failure mode in which the approximate posterior collapses to the prior and the latent variables carry little information about the data. Avoiding collapse enables better text generation, image synthesis, and a clearer view of the data's underlying structure. The primary users are machine learning researchers and data scientists focused on generative modeling.
186 stars. No commits in the last 6 months.
Use this if you are training Variational Autoencoders for text or image data and are encountering issues with posterior collapse, leading to poor quality generations or uninformative latent spaces.
Not ideal if you are looking for an off-the-shelf application to directly generate content without understanding VAEs, or if you are working with data types other than text and images.
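The paper's core remedy for posterior collapse is an "aggressive" training schedule: before each generator (decoder) update, the inference network (encoder) is updated repeatedly until its objective stops improving, so the encoder never lags behind the decoder. The sketch below shows only that control flow under stated assumptions; the function names and the toy diminishing-returns losses are illustrative and are not taken from the repository's actual code.

```python
def train_step(encoder_update, decoder_update, current_loss,
               aggressive=True, max_inner_steps=100, tol=1e-4):
    """One outer training step of the aggressive schedule (sketch).

    encoder_update / decoder_update: callables mapping loss -> new loss
    (hypothetical stand-ins for real gradient steps).
    Returns (new_loss, number_of_encoder_steps_taken).
    """
    inner_steps = 0
    if aggressive:
        # Inner loop: keep updating the encoder while the loss improves.
        while inner_steps < max_inner_steps:
            new_loss = encoder_update(current_loss)
            inner_steps += 1
            improved = current_loss - new_loss
            current_loss = new_loss
            if improved < tol:
                break  # encoder has converged for this outer step
    # Then a single decoder update, as in standard VAE training.
    current_loss = decoder_update(current_loss)
    return current_loss, inner_steps


# Toy usage: each encoder step halves the remaining improvement,
# so the inner loop runs until gains fall below `tol`.
state = {"gain": 0.5}

def toy_encoder_update(loss):
    state["gain"] *= 0.5
    return loss - state["gain"]

def toy_decoder_update(loss):
    return loss - 0.01  # one small generator step

loss, steps = train_step(toy_encoder_update, toy_decoder_update,
                         current_loss=1.0)
```

In the paper, the aggressive phase is also switched off once the mutual information between data and latents stops increasing, after which training reverts to the standard alternating updates; that stopping criterion is omitted here for brevity.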
Stars: 186
Forks: 33
Language: Python
License: MIT
Category:
Last pushed: Dec 15, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/jxhe/vae-lagging-encoder"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Related models
chaitanya100100/VAE-for-Image-Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its...
taldatech/soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE:...
lavinal712/AutoencoderKL
Train Your VAE: A VAE Training and Finetuning Script for SD/FLUX
Rayhane-mamah/Efficient-VDVAE
Official Pytorch and JAX implementation of "Efficient-VDVAE: Less is more"
zelaki/eqvae
[ICML'25] EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling.