UNITES-Lab/HEXA-MoE

Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy"

Quality score: 24 / 100 (Experimental)

This project offers an optimized way to run large AI models that use a Mixture-of-Experts (MoE) architecture. It takes an existing MoE model and executes it with less redundant computation, especially on systems that combine different types of computing hardware. The result is faster model execution and lower memory use, which benefits AI researchers and engineers working with large-scale deep learning models.
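For readers unfamiliar with the MoE pattern, the sketch below shows a toy Mixture-of-Experts layer in plain PyTorch: the kind of expert routing and dispatch that HEXA-MoE is designed to accelerate. It is purely illustrative and does not use HEXA-MoE's own API; the class name ToyMoELayer and all hyperparameters are invented for the example.

# Minimal, illustrative Mixture-of-Experts layer in plain PyTorch.
# Not HEXA-MoE's API; it only shows the kind of model the library targets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, dim: int = 64, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router: scores each expert per token
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Route each token to its top-k experts and combine the weighted outputs.
        scores = F.softmax(self.gate(x), dim=-1)         # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e                # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = ToyMoELayer()
    tokens = torch.randn(8, 64)      # 8 tokens, hidden size 64
    print(layer(tokens).shape)       # torch.Size([8, 64])

In a naive implementation like this, tokens are dispatched expert by expert, which wastes compute and memory when expert loads are unbalanced or devices differ in speed; reducing that redundancy is the problem the paper addresses.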

No commits in the last 6 months.

Use this if you are training or deploying large Mixture-of-Experts (MoE) models and need to reduce memory consumption or speed up computation, especially on systems with a mix of different GPUs or accelerators.

Not ideal if you are working with smaller, non-MoE deep learning models or do not have access to heterogeneous computing environments.

deep-learning-acceleration large-language-models model-optimization distributed-training heterogeneous-computing
No License · Stale 6m · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 10 / 25


Stars: 15
Forks: 2
Language: Python
License: None
Last pushed: Mar 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UNITES-Lab/HEXA-MoE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
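The same endpoint can also be queried from Python with only the standard library. The snippet below is a hypothetical equivalent of the curl command above; only the URL is taken from this page, and the response is assumed to be a JSON body.

# Fetch the quality data for UNITES-Lab/HEXA-MoE and pretty-print it.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/UNITES-Lab/HEXA-MoE"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)            # assumed: endpoint returns JSON

print(json.dumps(data, indent=2))     # e.g. overall score and per-category breakdown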