nadavbh12/VQ-VAE

Minimalist implementation of VQ-VAE in PyTorch

Score: 47 / 100 (Emerging)

This is a minimalist implementation of Vector Quantized Variational Autoencoders (VQ-VAE) and Convolutional Variational Autoencoders (CVAE) for image compression. It takes image datasets like MNIST or CIFAR10 as input and outputs reconstructed, compressed versions of those images. This tool is for machine learning researchers and practitioners who are experimenting with discrete latent representation models for image data.

559 stars. No commits in the last 6 months.

Use this if you are a machine learning researcher interested in understanding and applying VQ-VAE or CVAE architectures for image compression and representation learning on standard datasets like MNIST or CIFAR10.

Not ideal if you are looking for a production-ready image compression tool or a highly optimized, state-of-the-art VQ-VAE implementation for large-scale, real-world image datasets.
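The core idea behind VQ-VAE is that encoder outputs are snapped to their nearest entries in a learned codebook, giving a discrete latent representation. The following is a minimal sketch of that quantization step (not the repository's code; shapes, names, and the random codebook are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 8, 4                          # assumed codebook size and embedding dim
codebook = rng.normal(size=(K, D))   # stand-in for a learned codebook

def quantize(z_e):
    """Map each row of z_e (N, D) to its nearest codebook vector (L2)."""
    # Pairwise squared distances between encoder outputs and codebook entries
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = d.argmin(axis=1)           # discrete latent codes
    return codebook[idx], idx        # quantized vectors, code indices

z_e = rng.normal(size=(5, D))        # stand-in for encoder outputs
z_q, codes = quantize(z_e)
print(codes.shape, z_q.shape)
```

In the full model, gradients are passed through the non-differentiable argmin with a straight-through estimator, and commitment and codebook losses pull the encoder outputs and codebook entries toward each other.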

Tags: image-compression, representation-learning, variational-autoencoders, discrete-latent-variables, deep-learning-research

Badges: Stale (6m), No Package, No Dependents

Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 21 / 25


Stars: 559
Forks: 89
Language: Python
License: BSD-3-Clause
Last pushed: Jun 03, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nadavbh12/VQ-VAE"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
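The same endpoint can also be called from Python. A minimal sketch, assuming a plain JSON-over-HTTPS response; the URL structure is taken from the curl example above, and the header used to pass an optional key is a guess, since the page does not document it:

```python
import json
import urllib.request

# Base path copied from the curl example on this page
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch and decode the JSON quality record (makes a network call)."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        # Header name is an assumption; the API's key mechanism is undocumented here
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "nadavbh12", "VQ-VAE"))
```

Only `quality_url` is exercised above; `fetch_quality` performs the actual request and is subject to the daily rate limits.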