SkyworkAI/MoE-plus-plus

[ICLR 2025] MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts

Score: 36 / 100 (Emerging)

MoE++ is a tool for developers working with large language models (LLMs) who want better performance with lower computational overhead. It extends the standard Mixture-of-Experts (MoE) architecture with zero-computation experts, reducing the work spent per token and yielding faster, more efficient LLMs. It is aimed at machine learning engineers and researchers building or deploying advanced AI models.
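
To make the idea concrete, below is a minimal, illustrative PyTorch sketch, not the repository's actual implementation: a routing layer that mixes standard FFN experts with a zero-computation "copy" expert, so some tokens bypass the heavy FFN entirely. The class names (FFNExpert, ZeroComputationExpert, MoEPlusPlusLayer) are hypothetical and the top-1 routing is deliberately simplified.

# Minimal sketch of mixing FFN experts with a zero-computation expert.
# Illustrative only; class names and routing are simplified assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFNExpert(nn.Module):
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                                 nn.Linear(d_hidden, d_model))
    def forward(self, x):
        return self.net(x)

class ZeroComputationExpert(nn.Module):
    # "Copy" expert: returns tokens unchanged, costing essentially no FLOPs.
    def forward(self, x):
        return x

class MoEPlusPlusLayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, n_ffn_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [FFNExpert(d_model, d_hidden) for _ in range(n_ffn_experts)]
            + [ZeroComputationExpert()]   # extra near-free expert
        )
        self.router = nn.Linear(d_model, len(self.experts))

    def forward(self, x):                 # x: (tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        top_gate, top_idx = gates.max(dim=-1)   # top-1 routing for simplicity
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_gate[mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: route a small batch of token embeddings through the layer.
layer = MoEPlusPlusLayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)   # torch.Size([10, 64])

Tokens routed to the copy expert skip the FFN matmuls entirely, which is where the computational savings in this kind of design come from.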

264 stars. No commits in the last 6 months.

Use this if you are developing large language models and want to achieve better performance with lower computational resource requirements compared to traditional Mixture-of-Experts models.

Not ideal if you are an end-user looking for a pre-built application, as this project focuses on optimizing the underlying model architecture for developers.

large-language-models model-optimization machine-learning-engineering deep-learning-research computational-efficiency
Stale (6m) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 264
Forks: 13
Language: Python
License: Apache-2.0
Last pushed: Oct 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SkyworkAI/MoE-plus-plus"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
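
If you would rather query the endpoint from Python than curl, here is a minimal sketch using the requests library. It assumes the endpoint returns JSON; the exact response fields are not documented here.

# Minimal sketch: fetch the same quality data from Python.
# Assumes a JSON response; inspect the returned dict for available fields.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/SkyworkAI/MoE-plus-plus"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data)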