mmrl/disent-and-gen
Code from the paper "The Role of Disentanglement in Generalisation" (ICLR 2021).
This project lets machine learning researchers systematically evaluate how well variational autoencoder (VAE) models generalize to combinations of features not seen during training. Given common image datasets such as dSprites or 3DShapes, it outputs metrics on how well different VAEs reconstruct or compose these unseen combinations. Researchers working on representation learning and generalization would find it most useful.
No commits in the last 6 months.
Use this if you are a machine learning researcher studying how disentangled representations impact a model's ability to generalize to novel combinations of features.
Not ideal if you are a practitioner looking for a ready-to-use model for a specific computer vision task or if you are not familiar with deep learning research concepts.
Stars
21
Forks
—
Language
Jupyter Notebook
License
MIT
Category
ML Frameworks
Last pushed
May 28, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mmrl/disent-and-gen"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
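The same data can be fetched from Python. This is a minimal sketch: the URL pattern (`{category}/{owner}/{repo}`) is inferred from the example above, and since the JSON response shape is not documented here, the code parses the payload without assuming any specific fields. The function names `quality_url` and `fetch_quality` are illustrative, not part of the API.

```python
# Sketch of calling the quality API from Python instead of curl.
# The endpoint path segments ({category}/{owner}/{repo}) are inferred
# from the example URL above; the response fields are not documented
# here, so we only decode the JSON and return it as-is.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for one repository's quality data."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str,
                  timeout: float = 10.0) -> dict:
    """Fetch and decode the quality JSON (no key needed on the free tier)."""
    with urllib.request.urlopen(quality_url(category, owner, repo),
                                timeout=timeout) as resp:
        return json.load(resp)
```

For example, `quality_url("ml-frameworks", "mmrl", "disent-and-gen")` reproduces the URL used in the curl command above.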
Higher-rated alternatives
Westlake-AI/openmixup
CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark
YU1ut/MixMatch-pytorch
Code for "MixMatch: A Holistic Approach to Semi-Supervised Learning"
kamata1729/QATM_pytorch
PyTorch implementation of "QATM: Quality-Aware Template Matching for Deep Learning"
nttcslab/msm-mae
Masked Spectrogram Modeling using Masked Autoencoders for Learning General-purpose Audio Representations
rgeirhos/generalisation-humans-DNNs
Data, code & materials from the paper "Generalisation in humans and deep neural networks" (NeurIPS 2018)