YannDubs/disentangling-vae

Experiments for understanding disentanglement in VAE latent representations

Score: 50 / 100 (Established)

This project helps researchers and machine learning practitioners understand how well different Variational Autoencoder (VAE) models can separate distinct characteristics of an input image into independent factors. You input images (e.g., faces, MNIST digits) and it outputs visual representations of these disentangled factors, along with metrics to quantify their separation. This is ideal for those studying representation learning and generative models.

841 stars. No commits in the last 6 months.

Use this if you are a machine learning researcher or practitioner investigating and comparing various disentanglement techniques for VAEs.

Not ideal if you need a plug-and-play generative model for a specific application without deep experimentation into disentanglement.

Tags: representation-learning, generative-models, machine-learning-research, model-comparison, image-synthesis
Flags: Stale (6 months), no package published, no dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 24 / 25


Stars: 841
Forks: 148
Language: Python
License:
Last pushed: Feb 02, 2023
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/YannDubs/disentangling-vae"
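The same endpoint can also be queried from Python. A minimal sketch using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the JSON response schema is not documented here, so the code simply returns the parsed body:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (schema assumed, not documented)."""
    with urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)

# Matches the curl example above:
url = quality_url("ml-frameworks", "YannDubs", "disentangling-vae")
# → "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/YannDubs/disentangling-vae"
```

Calling `fetch_quality("ml-frameworks", "YannDubs", "disentangling-vae")` performs the same request as the curl command.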

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.