lliai/D2MoE

D^2-MoE: Delta Decompression for MoE-based LLMs Compression

Overall score: 37 / 100 (Emerging)

This project helps machine learning engineers and researchers reduce the computational resources needed for large language models (LLMs) that use a Mixture-of-Experts (MoE) architecture. It takes an existing MoE LLM and outputs a compressed version that uses fewer parameters, making it faster and more memory-efficient to run, without needing to retrain the model. It's designed for those deploying or experimenting with large AI models.
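
The general idea behind delta decompression, sketched below in Python, is to keep one shared base weight per layer and represent each expert as a low-rank delta on top of it. This is a conceptual illustration only: the function names, the use of a simple mean as the shared base, and the SVD rank truncation are assumptions for the sketch, not the repository's actual API or algorithm.

    # Conceptual sketch of delta decompression for MoE expert weights.
    # Not the D2MoE implementation; names and the rank choice are illustrative.
    import numpy as np

    def compress_experts(expert_weights, rank):
        """Represent each expert as a shared base plus a truncated low-rank delta."""
        base = np.mean(expert_weights, axis=0)          # shared base weight (assumption: simple mean)
        factors = []
        for w in expert_weights:
            delta = w - base                            # expert-specific delta
            u, s, vt = np.linalg.svd(delta, full_matrices=False)
            factors.append((u[:, :rank] * s[:rank], vt[:rank]))  # keep only the top-rank components
        return base, factors

    def reconstruct_expert(base, factors, i):
        """Approximate expert i from the shared base and its truncated delta factors."""
        us, vt = factors[i]
        return base + us @ vt

    # Toy usage: 4 experts with 64x64 weights, deltas kept at rank 8.
    experts = [np.random.randn(64, 64) for _ in range(4)]
    base, factors = compress_experts(experts, rank=8)
    approx_expert_0 = reconstruct_expert(base, factors, 0)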

No commits in the last 6 months.

Use this if you need to deploy or experiment with large MoE-based language models but are constrained by computational resources or want to improve inference speed.

Not ideal if you are looking to train a new LLM from scratch or if your models do not use the Mixture-of-Experts architecture.

large-language-models model-compression deep-learning-deployment artificial-intelligence resource-optimization
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 74
Forks: 8
Language: Python
License: Apache-2.0
Last pushed: Mar 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/lliai/D2MoE"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
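
If you prefer Python over curl, a minimal equivalent request looks like the sketch below. The response schema is not documented on this page, so the example simply prints the returned JSON as-is.

    # Minimal Python equivalent of the curl request above.
    import requests

    url = "https://pt-edge.onrender.com/api/v1/quality/transformers/lliai/D2MoE"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()   # fail loudly on rate limits or other HTTP errors
    print(resp.json())        # response fields are not documented here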