joshr17/IFM
Code for paper "Can contrastive learning avoid shortcut solutions?" NeurIPS 2021.
When training contrastive learning models on image data, for example in computer vision or medical imaging, this method (implicit feature modification, IFM) encourages the encoder to learn a broad range of discriminative features rather than relying on easy "shortcuts". You train on your own image dataset, and the resulting representations yield more robust and accurate downstream classifications or predictions. Intended for machine learning researchers and practitioners who build and evaluate computer vision or medical image analysis models.
No commits in the last 6 months.
Use this if you are training contrastive learning models and want to improve their generalization by preventing them from suppressing crucial features during representation learning.
Not ideal if you are not training contrastive models, or if feature suppression is not a concern in your image-based machine learning tasks.
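The paper's implicit feature modification amounts to an adversarial shift of the contrastive logits: inside the InfoNCE loss, the positive similarity is decreased by ε and each negative similarity increased by ε, which makes the instance-discrimination task harder and discourages shortcut features. A minimal NumPy sketch of that idea (the function name and the ε and τ defaults are illustrative, not taken from this repo):

```python
import numpy as np

def ifm_infonce_loss(anchor, positive, negatives, epsilon=0.1, temperature=0.5):
    """InfoNCE loss with an IFM-style logit perturbation (sketch).

    The positive similarity is shifted down by epsilon and every
    negative similarity shifted up by epsilon before temperature
    scaling, so the encoder cannot solve the task with one easy
    shortcut feature.  All vectors are assumed L2-normalised.
    """
    pos_logit = (anchor @ positive - epsilon) / temperature
    neg_logits = (negatives @ anchor + epsilon) / temperature
    logits = np.concatenate(([pos_logit], neg_logits))
    m = logits.max()  # subtract the max for numerical stability
    # negative log-softmax of the positive entry (index 0)
    return float(-(logits[0] - m) + np.log(np.exp(logits - m).sum()))
```

With `epsilon=0.0` this reduces to the standard InfoNCE loss; a larger ε yields a strictly harder, perturbed objective.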
Stars: 47
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Mar 29, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/joshr17/IFM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
AdaptiveMotorControlLab/CEBRA
Learnable latent embeddings for joint behavioral and neural analysis - Official implementation of CEBRA
theolepage/sslsv
Toolkit for training and evaluating Self-Supervised Learning (SSL) frameworks for Speaker...
PaddlePaddle/PASSL
PASSL includes image self-supervised learning algorithms such as SimCLR, MoCo v1/v2, BYOL, CLIP, PixPro, SimSiam, SwAV, BEiT, and MAE, as well as Vision...
YGZWQZD/LAMDA-SSL
30 Semi-Supervised Learning Algorithms
ModSSC/ModSSC
ModSSC: A Modular Framework for Semi Supervised Classification