mmrl/disent-and-gen

Code from the article: "The Role of Disentanglement in Generalisation" (ICLR, 2021).

Score: 22/100 (Experimental)

This project lets machine learning researchers systematically evaluate how well different variational autoencoder (VAE) models generalize to combinations of generative factors unseen during training. It takes common image datasets such as dSprites or 3D Shapes and reports metrics on how well VAEs reconstruct or compose these unseen combinations. It is aimed at researchers working on representation learning and generalization.

No commits in the last 6 months.

Use this if you are a machine learning researcher studying how disentangled representations impact a model's ability to generalize to novel combinations of features.

Not ideal if you are a practitioner looking for a ready-to-use model for a specific computer vision task or if you are not familiar with deep learning research concepts.

machine-learning-research representation-learning deep-learning generalization variational-autoencoders
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 21
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: May 28, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mmrl/disent-and-gen"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
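The same endpoint can be queried from Python instead of curl. A minimal sketch, assuming the API returns a JSON body; the `build_quality_url` helper and the response handling are illustrative, not part of any documented client library:

```python
# Minimal sketch of querying the quality API shown above.
# The endpoint URL comes from the listing; the JSON response
# shape is an assumption and may differ in practice.
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(collection: str, owner: str, repo: str) -> str:
    """Construct the per-repository quality endpoint URL."""
    return f"{BASE_URL}/{collection}/{owner}/{repo}"


def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """GET the endpoint and parse the body as JSON (assumed format)."""
    url = build_quality_url(collection, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Build the URL for this repository without hitting the network.
    print(build_quality_url("ml-frameworks", "mmrl", "disent-and-gen"))
```

No API key is required at the 100 requests/day tier, so the request above carries no authentication headers.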