miszkur/SelfSupervisedLearning

This repo reproduces the results of the paper "Understanding Self-Supervised Learning Dynamics without Contrastive Pairs" (https://arxiv.org/pdf/2102.06810.pdf).

Score: 21 / 100 (Experimental)

This project helps deep learning researchers and practitioners understand and reproduce cutting-edge self-supervised learning methods like BYOL, SimSiam, DirectPred, and DirectCopy. It takes image datasets like CIFAR-10 or STL-10 and outputs trained models that can extract meaningful features without labeled data, along with visualizations of how the model learns. Researchers in computer vision and self-supervised learning would use this to validate research or explore model dynamics.
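The methods reproduced here (BYOL, SimSiam, DirectPred, DirectCopy) share one core idea: two augmented views of the same image are encoded, and a predictor on one branch is trained to match the other branch's embedding, which is held fixed via stop-gradient. A minimal NumPy sketch of that symmetrized negative-cosine loss is below; the linear predictor `W` and all variable names are illustrative, not taken from this repo's code.

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity between predictor output p and target z.
    In SimSiam/BYOL, z sits behind a stop-gradient; in this plain NumPy
    sketch (no autograd), that simply means z is a constant target."""
    p = p / np.linalg.norm(p, axis=-1, keepdims=True)
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    return -np.mean(np.sum(p * z, axis=-1))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8))   # encoder embeddings for augmented view 1
z2 = rng.normal(size=(4, 8))   # encoder embeddings for augmented view 2
W = rng.normal(size=(8, 8))    # toy linear predictor (hypothetical stand-in)
p1, p2 = z1 @ W, z2 @ W        # predictor outputs for each view

# Symmetrized loss: each view's prediction matches the other's fixed embedding
loss = 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
print(float(loss))
```

The cited paper analyzes why this setup avoids collapse without negative pairs; DirectPred and DirectCopy replace the learned predictor `W` with one set directly from the embedding statistics.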

No commits in the last 6 months.

Use this if you are a deep learning researcher or practitioner interested in reproducing and exploring the dynamics of self-supervised learning models, particularly without contrastive pairs.

Not ideal if you are looking for a plug-and-play solution for general image classification without needing to understand the underlying self-supervised learning mechanisms.

deep-learning-research self-supervised-learning computer-vision model-understanding unsupervised-learning
Badges: No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 9
Forks: 1
Language: Jupyter Notebook
License: None
Last pushed: Apr 30, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/miszkur/SelfSupervisedLearning"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.