jxhe/vae-lagging-encoder

PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders" (ICLR 2019)

Score: 46 / 100 (Emerging)

This project helps researchers and practitioners working with Variational Autoencoders (VAEs) build more effective generative models. Given a text or image dataset, it produces a VAE that is less prone to "posterior collapse," a common failure mode in which the decoder ignores the latent variable and the model fails to learn meaningful latent representations. Avoiding collapse enables better text generation, image synthesis, and a clearer view of the data's underlying structure. The primary users are machine learning researchers and data scientists focused on generative modeling.

186 stars. No commits in the last 6 months.

Use this if you are training Variational Autoencoders on text or image data and are encountering posterior collapse, which leads to poor-quality generations or uninformative latent spaces.

Not ideal if you are looking for an off-the-shelf application to directly generate content without understanding VAEs, or if you are working with data types other than text and images.

generative-modeling unsupervised-learning text-generation image-synthesis latent-space-analysis
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 186
Forks: 33
Language: Python
License: MIT
Last pushed: Dec 15, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/jxhe/vae-lagging-encoder"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
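For programmatic access, the same endpoint can be called from Python with only the standard library. This is a sketch assuming the endpoint returns a JSON body; the response schema is not documented here, so the fetch helper simply decodes whatever JSON comes back.

```python
import json
import urllib.request

API = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    # Build the per-repo quality endpoint (same shape as the curl example).
    return f"{API}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem, owner, repo, timeout=10):
    # Hits the open endpoint (100 requests/day without a key) and decodes
    # the JSON body. Response schema is an assumption, not documented here.
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo),
                                timeout=timeout) as resp:
        return json.load(resp)
```

For example, `fetch_quality("diffusion", "jxhe", "vae-lagging-encoder")` requests the same URL as the curl command above.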