EfficientMoE/MoE-Infinity

PyTorch library for cost-effective, fast and easy serving of MoE models.

Quality score: 50 / 100 (Established)

This tool helps machine learning engineers and researchers serve large Mixture-of-Experts (MoE) models, like those used for chatbots and language translation, more efficiently. It takes HuggingFace-compatible MoE models as input and outputs generated text with significantly reduced latency and memory requirements, even on less powerful GPUs. The ideal user is someone managing the deployment of large language models.
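For orientation, here is a minimal sketch of what serving a HuggingFace-compatible MoE checkpoint through the library might look like. The MoE class name, the offload_path and device_memory_ratio config keys, and the checkpoint shown are illustrative assumptions, not confirmed API; consult the repository's README for the actual interface.

from transformers import AutoTokenizer

# Hypothetical entry point: assumes the library exposes an MoE wrapper class.
from moe_infinity import MoE

checkpoint = "google/switch-base-16"  # any HuggingFace-compatible MoE checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Assumed config keys: a path for offloading expert weights off the GPU,
# and a cap on the fraction of GPU memory the runtime may occupy.
config = {
    "offload_path": "./moe-offload",
    "device_memory_ratio": 0.75,
}
model = MoE(checkpoint, config)  # assumed constructor signature

inputs = tokenizer("Translate English to German: Hello, world!",
                   return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))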

Use this if you need to run large Mixture-of-Experts models on GPUs with limited memory and want to achieve faster inference times compared to other serving solutions.

Not ideal if you require distributed inference across multiple machines, as this open-source version currently focuses on single or multi-GPU inference on a single node.

Tags: Large Language Models, MLOps, Model Serving, AI Inference, Deep Learning, Deployment
No package published · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 14 / 25

Stars: 288
Forks: 25
Language: Python
License: Apache-2.0
Last pushed: Mar 03, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/EfficientMoE/MoE-Infinity"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
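The same endpoint can be queried from Python; below is a minimal sketch using the requests library. It assumes the endpoint returns a JSON document; since the response schema is not documented here, the payload is printed as-is rather than guessing at field names.

import requests

# Public endpoint from the curl example above; no API key is needed
# for up to 100 requests/day.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/EfficientMoE/MoE-Infinity")

resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Assumption: the endpoint returns JSON; print the whole payload.
print(resp.json())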