mdhabibi/DeepLearning-VAE

Exploring the depths of generative learning with a $\beta$-Variational Autoencoder ($\beta$-VAE) applied to the MNIST dataset for robust digit reconstruction and latent space analysis.

29 / 100 (Experimental)

This project helps machine learning practitioners understand and experiment with Variational Autoencoders (VAEs) for generating new image data. It takes images of handwritten digits as input and outputs reconstructed versions of those digits, along with new, similar-looking digits. Researchers or students in machine learning and computer vision would use this to grasp how generative models learn and create images.
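The repository's notebooks are not reproduced here, but the objective a β-VAE optimizes can be sketched in a few lines. This is an illustrative sketch, not the repo's code: the function name, latent dimension, and use of a squared-error reconstruction term are assumptions.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Per-sample beta-VAE objective: reconstruction error plus a
    beta-weighted KL divergence between the approximate posterior
    N(mu, diag(exp(logvar))) and the standard normal prior.
    Illustrative only; the repo may use a different reconstruction term."""
    # Reconstruction term: squared error summed over pixels.
    recon = np.sum((x - x_recon) ** 2)
    # Closed-form KL divergence of a diagonal Gaussian from N(0, I).
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl

# With a perfect reconstruction and a posterior equal to the prior,
# both terms vanish and the loss is zero.
x = np.ones(784)  # a flattened 28x28 MNIST-sized image
loss = beta_vae_loss(x, x, np.zeros(16), np.zeros(16))
```

Setting `beta=1.0` recovers the standard VAE objective; larger values trade reconstruction fidelity for a more disentangled latent space.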

No commits in the last 6 months.

Use this if you are studying generative AI and want to see how VAEs work for image reconstruction and new image generation, especially for simple image datasets like handwritten digits.

Not ideal if you need a production-ready system for complex image generation or anomaly detection in high-resolution images, as this is an exploratory project focused on a foundational dataset.

generative-ai image-synthesis dimensionality-reduction computer-vision machine-learning-research
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 7
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Jun 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mdhabibi/DeepLearning-VAE"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.