bwconrad/soft-moe
PyTorch implementation of "From Sparse to Soft Mixtures of Experts"
This project helps machine learning researchers and practitioners who are building or experimenting with large neural networks, especially for computer vision tasks. It provides a way to incorporate 'Soft Mixture of Experts' (Soft-MoE) layers into PyTorch-based Vision Transformers, potentially improving model efficiency and performance. You input an image and get predictions, similar to a standard image classification model.
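To give a sense of what a Soft-MoE layer does, here is a minimal PyTorch sketch of the mechanism described in the paper: every token contributes to every expert's input slots via softmax-weighted averaging, so routing stays fully differentiable. This is an illustration of the technique, not this repository's API; the names SoftMoE, num_experts, and slots_per_expert are assumptions chosen for the example.

import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    def __init__(self, dim, num_experts=4, slots_per_expert=1, hidden_mult=4):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        # One learnable embedding per (expert, slot) pair.
        self.slot_embeds = nn.Parameter(
            torch.randn(num_experts * slots_per_expert, dim)
        )
        # Each expert is a standard transformer-style MLP.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(dim, dim * hidden_mult),
                nn.GELU(),
                nn.Linear(dim * hidden_mult, dim),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x):  # x: (batch, tokens, dim)
        # Token-slot affinities, shape (batch, tokens, total_slots).
        logits = torch.einsum("bnd,md->bnm", x, self.slot_embeds)
        dispatch = logits.softmax(dim=1)  # normalize over tokens per slot
        combine = logits.softmax(dim=2)   # normalize over slots per token
        # Each slot is a weighted average of all tokens.
        slots = torch.einsum("bnm,bnd->bmd", dispatch, x)
        slots = slots.view(
            x.size(0), self.num_experts, self.slots_per_expert, -1
        )
        # Each expert processes only its own slots.
        outs = torch.stack(
            [expert(slots[:, i]) for i, expert in enumerate(self.experts)],
            dim=1,
        ).flatten(1, 2)  # (batch, total_slots, dim)
        # Each output token is a weighted average of all expert outputs.
        return torch.einsum("bnm,bmd->bnd", combine, outs)

# Example: a batch of 14x14 ViT patch tokens of width 192.
layer = SoftMoE(dim=192, num_experts=8)
out = layer(torch.randn(2, 196, 192))  # -> (2, 196, 192)

In a Vision Transformer, a layer like this would typically replace the MLP block in some of the encoder layers, which is the setup the paper and this repository target.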
No commits in the last 6 months. Available on PyPI.
Use this if you are a machine learning researcher or engineer working with PyTorch and Vision Transformers, and you want to implement or explore advanced Mixture of Experts architectures for improved model scaling and performance.
Not ideal if you are looking for an out-of-the-box solution for image classification without needing to delve into model architecture modifications or PyTorch code.
Stars: 68
Forks: 3
Language: Python
License: Apache-2.0
Category: ML frameworks
Last pushed: Aug 22, 2023
Commits (30d): 0
Dependencies: 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/bwconrad/soft-moe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
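The same endpoint can also be queried from Python. A minimal sketch, assuming the response body is JSON (the exact payload schema is not documented here):

import requests  # third-party; pip install requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/bwconrad/soft-moe"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors, e.g. rate limiting
print(resp.json())       # assumed JSON payload of the stats above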
Higher-rated alternatives
AdaptiveMotorControlLab/CEBRA
Learnable latent embeddings for joint behavioral and neural analysis - Official implementation of CEBRA
theolepage/sslsv
Toolkit for training and evaluating Self-Supervised Learning (SSL) frameworks for Speaker...
PaddlePaddle/PASSL
PASSL includes image self-supervised learning algorithms such as SimCLR, MoCo v1/v2, BYOL, CLIP, PixPro, SimSiam, SwAV, BEiT, and MAE, as well as Vision...
YGZWQZD/LAMDA-SSL
30 Semi-Supervised Learning Algorithms
ModSSC/ModSSC
ModSSC: A Modular Framework for Semi-Supervised Classification