efeslab/fiddler

[ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration

Overall score: 42 / 100 (Emerging)

This project helps you run very large language models, specifically those built with a 'Mixture-of-Experts' (MoE) design, on your local computer, even if your graphics card (GPU) doesn't have enough memory. You provide a prompt, and it efficiently generates responses from models like Mixtral-8x7B without needing multiple expensive GPUs. This is for researchers, developers, or anyone who wants to test or use advanced MoE language models locally.

262 stars. No commits in the last 6 months.

Use this if you need to run powerful, unquantized Mixture-of-Experts (MoE) language models like Mixtral-8x7B on a single local GPU with limited memory, and you need fast response times.

Not ideal if your CPU lacks AVX512 support, if you need to run models other than Mixtral-8x7B, or if you need quantized models, which are not currently supported.

large-language-models local-inference deep-learning-research machine-learning-deployment resource-optimization
Status: Stale (6 months) · No package published · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25
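
The four subscores are each out of 25 and appear to sum to the overall score: 0 + 10 + 16 + 16 = 42.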


Stars: 262
Forks: 32
Language: Python
License: Apache-2.0
Last pushed: Nov 18, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/efeslab/fiddler"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
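
If you prefer to query the endpoint from Python rather than curl, the sketch below fetches the same URL and dumps the JSON payload. It assumes only that the endpoint returns JSON; the response schema is not documented in this listing, so the snippet prints the full payload instead of picking specific keys.

import json
import urllib.request

# Same endpoint as the curl example above; no API key is required for
# up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/efeslab/fiddler"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Field names are not documented here, so print the whole payload and
# inspect it before depending on specific keys.
print(json.dumps(data, indent=2))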