pytorch-vae and pytorch-vq-vae
These are complementary implementations rather than interchangeable alternatives. The basic VAE provides a foundation for understanding variational inference, while VQ-VAE extends it with vector quantization to learn discrete latent representations, so each suits a different use case.
About pytorch-vae
ethanluoyc/pytorch-vae
A Variational Autoencoder (VAE) implemented in PyTorch
This is a foundational building block for machine learning engineers and researchers working with deep learning models. A VAE takes complex data, such as images or text, and learns a compressed, meaningful latent representation. That representation can then be used to generate new, similar data, or for tasks like anomaly detection.
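To make the encode-compress-decode idea concrete, here is a minimal VAE sketch in PyTorch. It is an illustrative example, not code from the ethanluoyc/pytorch-vae repository; the class name, layer sizes, and `vae_loss` helper are all assumptions.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    # Hypothetical minimal VAE; sizes are illustrative, not the repo's.
    def __init__(self, in_dim=784, hidden=256, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # ELBO = reconstruction term + KL divergence to the unit Gaussian prior
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

After training, sampling `z` from the unit Gaussian and passing it through the decoder generates new data, which is what makes the VAE useful beyond pure compression.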
About pytorch-vq-vae
zalandoresearch/pytorch-vq-vae
PyTorch implementation of VQ-VAE by Aäron van den Oord et al.
This is a PyTorch implementation of VQ-VAE (vector quantized variational autoencoder), aimed at machine learning practitioners who want to experiment with discrete-latent generative models.
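The distinctive piece of a VQ-VAE is the quantization bottleneck: each continuous encoder output is snapped to its nearest entry in a learned codebook, and gradients flow through via a straight-through estimator. The sketch below illustrates that mechanism under assumed names and sizes; it is not the zalandoresearch repository's actual API.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    # Hypothetical sketch of the VQ bottleneck from van den Oord et al.;
    # num_codes, dim, and beta are illustrative choices.
    def __init__(self, num_codes=64, dim=16, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1 / num_codes, 1 / num_codes)
        self.beta = beta  # commitment loss weight

    def forward(self, z):
        # z: (batch, dim) continuous encoder outputs
        d = torch.cdist(z, self.codebook.weight)   # distance to every code
        idx = d.argmin(dim=1)                      # nearest-code indices (the discrete latents)
        zq = self.codebook(idx)                    # quantized vectors
        # Codebook loss pulls codes toward encoder outputs;
        # commitment loss pulls encoder outputs toward their codes
        loss = ((zq - z.detach()) ** 2).mean() \
             + self.beta * ((zq.detach() - z) ** 2).mean()
        # Straight-through estimator: copy gradients past the argmin
        zq = z + (zq - z).detach()
        return zq, idx, loss
```

The returned `idx` tensor is the discrete representation; downstream models (e.g. an autoregressive prior) are typically trained over these indices rather than over continuous latents.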