chaitanya100100/VAE-for-Image-Generation
A Variational Autoencoder generative model implemented in Keras for image generation, with latent-space visualization on the MNIST and CIFAR10 datasets
This project lets machine learning practitioners explore Variational Autoencoders (VAEs) for image generation. Given a dataset of images such as MNIST or CIFAR10, it trains a VAE that can generate new, similar images and visualize the latent space that organizes image features. It is aimed at researchers and students learning about generative models and latent representations.
122 stars. No commits in the last 6 months.
Use this if you are a machine learning student or researcher looking to understand and experiment with Variational Autoencoders for image generation and latent space exploration.
Not ideal if you need a production-ready image generation system or a VAE implementation for non-image data.
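For orientation, a VAE like this one is trained by minimizing the negative ELBO: a reconstruction term plus a KL term that pulls the latent posterior toward a unit Gaussian. A minimal NumPy sketch of that per-example loss (an illustration of the standard objective, not code from this repository; all names are ours):

```python
# Hedged sketch of the per-example VAE loss (negative ELBO).
# Illustrates the objective a Keras VAE optimizes; not the repo's code.
import numpy as np

def vae_loss(x, x_recon, z_mean, z_log_var, eps=1e-7):
    """Binary cross-entropy reconstruction + KL(q(z|x) || N(0, I))."""
    x_recon = np.clip(x_recon, eps, 1 - eps)  # avoid log(0)
    recon = -np.sum(x * np.log(x_recon) + (1 - x) * np.log(1 - x_recon))
    kl = -0.5 * np.sum(1 + z_log_var - z_mean**2 - np.exp(z_log_var))
    return recon + kl

# With z_mean = 0 and z_log_var = 0 the KL term vanishes, leaving only
# the reconstruction error of the 50/50 guess:
x = np.array([0.0, 1.0])
print(vae_loss(x, np.array([0.5, 0.5]), np.zeros(2), np.zeros(2)))  # → ~1.3863 (= 2 ln 2)
```

The KL term is what shapes the latent space into the smooth, Gaussian-like structure that makes the repo's latent-space visualizations (and sampling of new images) possible.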
Stars
122
Forks
24
Language
Python
License
MIT
Category
Last pushed
Oct 22, 2018
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/chaitanya100100/VAE-for-Image-Generation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
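The same endpoint can be called from Python with the standard library; a minimal sketch (the response's JSON field names are not documented here, so the fetch is left commented):

```python
# Hedged sketch: build and (optionally) fetch the per-repo API URL above.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL used in the curl example."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("chaitanya100100", "VAE-for-Image-Generation")
# data = json.load(urlopen(url))  # uncomment to fetch (100 requests/day without a key)
```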
Related models
jxhe/vae-lagging-encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational...
taldatech/soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE:...
lavinal712/AutoencoderKL
Train Your VAE: A VAE Training and Finetuning Script for SD/FLUX
Rayhane-mamah/Efficient-VDVAE
Official Pytorch and JAX implementation of "Efficient-VDVAE: Less is more"
zelaki/eqvae
[ICML'25] EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling.