SkyworkAI/MoE-plus-plus
[ICLR 2025] MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts
MoE++ is a framework for developers working with large language models (LLMs) that extends the standard Mixture-of-Experts (MoE) architecture with zero-computation experts, improving throughput and efficiency while reducing computational overhead. It is aimed at machine learning engineers and researchers building or deploying advanced AI models.
264 stars. No commits in the last 6 months.
Use this if you are developing large language models and want to achieve better performance with lower computational resource requirements compared to traditional Mixture-of-Experts models.
Not ideal if you are an end-user looking for a pre-built application, as this project focuses on optimizing the underlying model architecture for developers.
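Per the paper title, the core idea is routing some tokens to experts that cost (almost) nothing to evaluate. The sketch below is a conceptual illustration, not code from the repo: it shows the three zero-computation expert types the paper introduces alongside standard FFN experts, with function names chosen here for clarity.

```python
# Conceptual sketch (not from the MoE++ repo) of zero-computation experts:
# a zero expert discards the token, a copy expert passes it through
# unchanged, and a constant expert replaces it with a fixed (learned)
# vector. Vectors are plain Python lists to keep the sketch dependency-free.
from typing import Callable, List

Vector = List[float]

def zero_expert(x: Vector) -> Vector:
    """Output all zeros: the token contributes nothing to this layer."""
    return [0.0] * len(x)

def copy_expert(x: Vector) -> Vector:
    """Identity mapping: the token skips the FFN computation entirely."""
    return list(x)

def make_constant_expert(c: Vector) -> Callable[[Vector], Vector]:
    """Return an expert that ignores its input and emits a fixed vector."""
    def constant_expert(x: Vector) -> Vector:
        return list(c)
    return constant_expert
```

Because these experts involve no matrix multiplications, a router that sends easy tokens to them spends full FFN compute only where it matters.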
Stars: 264
Forks: 13
Language: Python
License: Apache-2.0
Category:
Last pushed: Oct 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SkyworkAI/MoE-plus-plus"
Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
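The curl command above can also be called from Python. A minimal stdlib sketch, assuming the endpoint returns JSON (only the URL pattern comes from this page; the response schema is an assumption):

```python
# Fetch a repo's quality record from the pt-edge API using only the
# standard library. Only the URL pattern is taken from this page; the
# JSON response shape is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """GET the quality record and decode it as JSON.

    Unauthenticated calls are limited to 100 requests/day.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (performs a network request):
# data = fetch_quality("transformers", "SkyworkAI", "MoE-plus-plus")
```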
Higher-rated alternatives
EfficientMoE/MoE-Infinity
PyTorch library for cost-effective, fast and easy serving of MoE models.
raymin0223/mixture_of_recursions
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation...
AviSoori1x/makeMoE
From scratch implementation of a sparse mixture of experts language model inspired by Andrej...
thu-nics/MoA
[CoLM'25] The official implementation of the paper
jaisidhsingh/pytorch-mixtures
One-stop solutions for Mixture of Expert modules in PyTorch.