jaisidhsingh/pytorch-mixtures
One-stop solution for Mixture of Experts (MoE) modules in PyTorch.
This is a tool for machine learning engineers and researchers who are building or experimenting with large neural networks. It simplifies integrating Mixture of Experts (MoE) layers into custom PyTorch models: you provide your existing architecture and a list of "expert" sub-networks, and it returns an enhanced network capable of more efficient, specialized processing.
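To make the flow above concrete, here is a minimal from-scratch sketch of the idea behind an MoE layer: a router scores each token, the top-k experts are selected per token, and their outputs are combined with the renormalized router weights. This is an illustrative toy in plain PyTorch, not pytorch-mixtures' actual API; all names (TinyMoE, router, experts) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Illustrative sparse MoE layer (not the pytorch-mixtures API)."""

    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        # Router assigns each token a score per expert.
        self.router = nn.Linear(dim, num_experts)
        # The "expert" sub-networks: here, small feed-forward blocks.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        logits = self.router(x)                          # (batch, seq, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)             # renormalize over the chosen k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                  # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

moe = TinyMoE(dim=8)
y = moe(torch.randn(2, 5, 8))
print(y.shape)  # torch.Size([2, 5, 8])
```

In a library like this one, a layer of this shape would typically replace the feed-forward block inside a transformer layer, so only the routed experts run per token.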
Available on PyPI.
Use this if you are a machine learning engineer or researcher looking to incorporate Mixture of Experts layers into your custom PyTorch neural networks with minimal effort, to improve performance or efficiency.
Not ideal if you are unfamiliar with PyTorch or with implementing neural network architectures, as this is a developer-focused tool.
Stars
27
Forks
1
Language
Python
License
MIT
Category
Last pushed
Feb 10, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jaisidhsingh/pytorch-mixtures"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
EfficientMoE/MoE-Infinity
PyTorch library for cost-effective, fast and easy serving of MoE models.
raymin0223/mixture_of_recursions
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation...
AviSoori1x/makeMoE
From scratch implementation of a sparse mixture of experts language model inspired by Andrej...
thu-nics/MoA
[CoLM'25] The official implementation of the paper
CASE-Lab-UMD/Unified-MoE-Compression
The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study...