Luodian/Generalizable-Mixture-of-Experts
GMoE could be the next backbone model for many kinds of generalization tasks.
This project helps machine learning researchers and practitioners develop models that perform well on new, unseen datasets without needing to retrain or fine-tune extensively. You provide existing image datasets, and it outputs a highly generalizable model that can make accurate predictions even when faced with significant shifts in data distribution. This is ideal for those building robust computer vision systems across diverse environments.
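For orientation, here is a minimal sketch of the core idea behind a sparsely gated Mixture-of-Experts layer with top-k routing, which is the family of architecture the repository's name refers to. This is a generic PyTorch illustration, not the repository's actual implementation; the class name, expert shapes, and hyperparameters below are all assumptions chosen for brevity.

# Generic sketch of a sparsely gated MoE layer (top-k routing).
# NOT the repository's implementation; names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router: scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        logits = self.gate(x)                            # (tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # route each token to its top-k experts
        weights = F.softmax(weights, dim=-1)             # normalize the selected gate scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

Because only top_k experts run per token, capacity grows with num_experts while per-token compute stays roughly constant; the generalization claim in the paper rests on how routing specializes experts across data distributions.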
273 stars. No commits in the last 6 months.
Use this if you need to build computer vision models that reliably generalize across different data sources or operational environments without extensive retraining.
Not ideal if you are looking for a pre-trained model to use directly without further machine learning development.
Stars: 273
Forks: 28
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Mar 21, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Luodian/Generalizable-Mixture-of-Experts"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
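A minimal sketch of the same request in Python, using only the standard library. The endpoint URL is taken from the curl command above; the response is assumed to be JSON, and the field names in the commented line ("stars", "forks") are guesses, so inspect the dumped payload to confirm the actual schema.

# Fetch the repo-quality record shown on this page (response schema assumed).
import json
import urllib.request

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/Luodian/Generalizable-Mixture-of-Experts")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))        # dump the full payload first
# print(data.get("stars"), data.get("forks"))  # assumed field names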
Higher-rated alternatives
AdaptiveMotorControlLab/CEBRA
Learnable latent embeddings for joint behavioral and neural analysis - Official implementation of CEBRA
theolepage/sslsv
Toolkit for training and evaluating Self-Supervised Learning (SSL) frameworks for Speaker...
PaddlePaddle/PASSL
PASSL includes image self-supervised learning algorithms such as SimCLR, MoCo v1/v2, BYOL, CLIP, PixPro, SimSiam, SwAV, BEiT, and MAE, as well as Vision...
YGZWQZD/LAMDA-SSL
30 Semi-Supervised Learning Algorithms
ModSSC/ModSSC
ModSSC: A Modular Framework for Semi-Supervised Classification