modelscope/dash-infer
DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including CUDA, x86 and ARMv9.
This project helps operations engineers and IT managers run large language models (LLMs) and multimodal large language models (MLLMs) more efficiently on various hardware. It takes popular models like Qwen or LLaMA and optimizes their performance, delivering faster responses and lower operational costs. The target audience is anyone responsible for deploying and managing AI applications at scale.
273 stars. No commits in the last 6 months.
Use this if you need to deploy large language models or multimodal models in a production environment and optimize their performance for speed and cost-efficiency on different hardware like GPUs or CPUs.
Not ideal if you are looking for a tool to train models or if your primary concern is developing new model architectures rather than deploying existing ones.
Stars: 273
Forks: 28
Language: C
License: Apache-2.0
Category:
Last pushed: Aug 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/modelscope/dash-infer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...