fkodom/soft-mixture-of-experts

PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)

35 / 100 (Emerging)

This project helps machine learning researchers and practitioners build highly efficient Vision Transformers for image classification. The resulting models take an image as input and produce a classification prediction or feature embeddings; Soft MoE layers increase model capacity with little added inference cost, which is especially useful when training on large datasets. The primary users are machine learning engineers and AI researchers focused on computer vision tasks.

No commits in the last 6 months.

Use this if you need to build extremely large and efficient Vision Transformer models for image classification or feature extraction, and you are comfortable with PyTorch.

Not ideal if you are looking for a plug-and-play solution without any coding, or if your primary focus is on natural language processing models rather than computer vision.
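For a sense of what the library implements, here is a minimal sketch of the Soft MoE routing described in the paper (arXiv:2308.00951): tokens are softly dispatched to learnable expert slots and recombined, so routing stays fully differentiable. This illustrates the algorithm rather than this repository's exact API; the class name `SoftMoE` and its parameters here are assumptions.

```python
# Minimal sketch of Soft MoE routing (arXiv:2308.00951).
# Illustrative only; not necessarily this repository's exact API.
import torch
import torch.nn as nn


class SoftMoE(nn.Module):
    """Soft mixture-of-experts layer: every token contributes to every
    expert slot via softmax weights, so routing is differentiable."""

    def __init__(self, dim: int, num_experts: int, slots_per_expert: int = 1):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        # Learnable slot embeddings, one column per (expert, slot) pair.
        self.phi = nn.Parameter(torch.randn(dim, num_experts * slots_per_expert))
        # Each expert is a small MLP, as in a standard Transformer block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim)
        logits = torch.einsum("bnd,ds->bns", x, self.phi)
        dispatch = logits.softmax(dim=1)  # normalize over tokens, per slot
        combine = logits.softmax(dim=2)   # normalize over slots, per token
        # Mix tokens into slots: (batch, num_slots, dim)
        slots = torch.einsum("bns,bnd->bsd", dispatch, x)
        # Each expert processes its own group of slots.
        slots = slots.reshape(x.size(0), self.num_experts, self.slots_per_expert, -1)
        outs = torch.stack(
            [expert(slots[:, i]) for i, expert in enumerate(self.experts)], dim=1
        )
        outs = outs.flatten(1, 2)         # back to (batch, num_slots, dim)
        # Mix expert outputs back into tokens: (batch, num_tokens, dim)
        return torch.einsum("bns,bsd->bnd", combine, outs)


# Example: a batch of 14x14 ViT patch embeddings.
layer = SoftMoE(dim=128, num_experts=8, slots_per_expert=2)
tokens = torch.randn(4, 196, 128)
out = layer(tokens)  # (4, 196, 128)
```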

image-classification computer-vision deep-learning-research model-optimization AI-research
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 10 / 25


Stars: 82
Forks: 6
Language: Python
License: MIT
Last pushed: Oct 05, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fkodom/soft-mixture-of-experts"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
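The same endpoint can be queried from Python with the `requests` library. A minimal sketch follows; the response schema is not documented here, so the JSON is printed as-is.

```python
# Minimal sketch: fetch the quality data from the API shown above.
import requests

url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/fkodom/soft-mixture-of-experts"
)
response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors
print(response.json())       # schema undocumented here; inspect the raw payload
```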