nadavbh12/VQ-VAE
Minimalist implementation of VQ-VAE in PyTorch
This is a minimalist implementation of Vector Quantized Variational Autoencoders (VQ-VAE) and Convolutional Variational Autoencoders (CVAE) for image compression. It takes image datasets like MNIST or CIFAR10 as input and outputs reconstructed, compressed versions of those images. This tool is for machine learning researchers and practitioners who are experimenting with discrete latent representation models for image data.
559 stars. No commits in the last 6 months.
Use this if you are a machine learning researcher interested in understanding and applying VQ-VAE or CVAE architectures for image compression and representation learning on standard datasets like MNIST or CIFAR10.
Not ideal if you are looking for a production-ready image compression tool or a highly optimized, state-of-the-art VQ-VAE implementation for large-scale, real-world image datasets.
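To make the core idea concrete: the defining step of VQ-VAE is replacing each continuous encoder output vector with its nearest entry in a learned codebook. Below is a minimal, framework-agnostic NumPy sketch of that nearest-neighbor quantization step — an illustration of the technique, not code from this repository (the function name and shapes are assumptions for the example).

```python
import numpy as np

def quantize(z, codebook):
    """Map each encoder output to its nearest codebook vector.

    z:        (N, D) array of continuous encoder outputs.
    codebook: (K, D) array of learned embedding vectors.
    Returns the quantized vectors (N, D) and their codebook indices (N,).
    """
    # Squared Euclidean distance from every z[i] to every codebook entry.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    idx = d.argmin(axis=1)                                          # (N,)
    return codebook[idx], idx

# Toy example: a 2-entry codebook in 2-D.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
zq, idx = quantize(z, codebook)
# idx → [0, 1]; each row of zq is the selected codebook vector
```

In a full VQ-VAE, the gradient is passed straight through this non-differentiable lookup, and extra loss terms pull the codebook entries and encoder outputs toward each other.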
Stars: 559
Forks: 89
Language: Python
License: BSD-3-Clause
Last pushed: Jun 03, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nadavbh12/VQ-VAE"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Naresh1318/Adversarial_Autoencoder
A wizard's guide to Adversarial Autoencoders
mseitzer/pytorch-fid
Compute FID scores with PyTorch.
acids-ircam/RAVE
Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder
ratschlab/aestetik
AESTETIK: Convolutional autoencoder for learning spot representations from spatial...
jaanli/variational-autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)