fkodom/soft-mixture-of-experts
PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)
This project helps machine learning researchers and practitioners build efficient Vision Transformers for image classification. The resulting models take an image as input and output a classification prediction or feature embeddings, scaling well to large datasets. The primary users are machine learning engineers and AI researchers focused on computer vision tasks. An illustrative sketch of the Soft MoE routing idea follows below.
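For readers who want to see the core idea before diving into the repository, here is a minimal PyTorch sketch of the Soft MoE routing scheme as described in the paper: input tokens are softly dispatched to expert "slots", each expert processes its slots, and outputs are softly combined back per token. This is an illustrative reimplementation; the class name SoftMoESketch and the MLP experts are assumptions for exposition and do not reflect this repository's actual API.

import torch
import torch.nn as nn

class SoftMoESketch(nn.Module):
    # Illustrative Soft MoE layer following the paper's routing scheme.
    # Hypothetical names for exposition; not this repository's actual API.

    def __init__(self, dim: int, num_experts: int, slots_per_expert: int):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        # One learnable d-dimensional vector per slot (the paper's phi parameters).
        self.phi = nn.Parameter(torch.randn(dim, num_experts * slots_per_expert))
        # Each expert is a feed-forward block, as in a transformer FFN.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, m, d = x.shape
        logits = torch.einsum("bmd,ds->bms", x, self.phi)
        # Dispatch weights: softmax over tokens, so each slot is a convex
        # combination of all input tokens.
        dispatch = logits.softmax(dim=1)
        # Combine weights: softmax over slots, so each token is a convex
        # combination of all slot outputs.
        combine = logits.softmax(dim=2)
        slots = torch.einsum("bms,bmd->bsd", dispatch, x)
        slots = slots.view(b, self.num_experts, self.slots_per_expert, d)
        out = torch.stack(
            [expert(slots[:, i]) for i, expert in enumerate(self.experts)], dim=1
        ).view(b, -1, d)
        return torch.einsum("bms,bsd->bmd", combine, out)

Example usage, with token counts chosen to mimic a ViT patch sequence:

moe = SoftMoESketch(dim=256, num_experts=8, slots_per_expert=2)
y = moe(torch.randn(4, 196, 256))  # (batch, tokens, dim) in -> same shape out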
No commits in the last 6 months.
Use this if you need to build extremely large and efficient Vision Transformer models for image classification or feature extraction, and you are comfortable with PyTorch.
Not ideal if you are looking for a plug-and-play solution without any coding, or if your primary focus is on natural language processing models rather than computer vision.
Stars: 82
Forks: 6
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Oct 05, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fkodom/soft-mixture-of-experts"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
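For programmatic access, a minimal Python sketch using the standard requests library against the endpoint shown above; the response schema is not documented here, so this simply prints whatever JSON is returned.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fkodom/soft-mixture-of-experts"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # schema not documented here; inspect the returned fields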
Higher-rated alternatives
AdaptiveMotorControlLab/CEBRA
Learnable latent embeddings for joint behavioral and neural analysis - Official implementation of CEBRA
theolepage/sslsv
Toolkit for training and evaluating Self-Supervised Learning (SSL) frameworks for Speaker...
PaddlePaddle/PASSL
PASSL includes image self-supervised learning algorithms such as SimCLR, MoCo v1/v2, BYOL, CLIP, PixPro, SimSiam, SwAV, BEiT, and MAE, as well as Vision...
YGZWQZD/LAMDA-SSL
30 Semi-Supervised Learning Algorithms
ModSSC/ModSSC
ModSSC: A Modular Framework for Semi-Supervised Classification