AdamG012/moe-paper-models
A summary of MoE experimental setups across a number of different papers.
This project provides a summary of experimental setups for Mixture of Experts (MoE) models across various research papers. It takes information from academic papers and presents it as organized tables detailing model sizes, expert configurations, and hardware requirements. Machine learning researchers and practitioners working with large language models would use this to quickly compare different MoE implementations.
No commits in the last 6 months.
Use this if you need to quickly understand and compare the architectural details and experimental setups of different Mixture of Experts (MoE) models from academic research.
Not ideal if you are looking for executable code, training scripts, or deep technical explanations of MoE algorithms.
Stars: 16
Forks: 1
Language: —
License: —
Category: —
Last pushed: Feb 16, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AdamG012/moe-paper-models"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
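For programmatic access beyond curl, here is a minimal Python sketch using only the standard library. It assumes the endpoint returns JSON; the response fields are not documented here, so it simply pretty-prints whatever comes back, and it does not show how an API key would be supplied.

# Minimal sketch: fetch the quality data for this repo from the pt-edge API.
# Assumes a JSON response; exact fields are not documented here.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/AdamG012/moe-paper-models"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Pretty-print the returned JSON so the available fields can be inspected.
print(json.dumps(data, indent=2))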
Higher-rated alternatives
galilai-group/stable-pretraining
Reliable, minimal and scalable library for pretraining foundation and world models
CognitiveAISystems/MAPF-GPT
[AAAI-2025] This repository contains MAPF-GPT, a deep learning-based model for solving MAPF...
UKPLab/gpl
Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled...
larslorch/avici
Amortized Inference for Causal Structure Learning, NeurIPS 2022
svdrecbd/mhc-mlx
MLX + Metal implementation of mHC: Manifold-Constrained Hyper-Connections by DeepSeek-AI.