microsoft/sarathi-serve
A low-latency & high-throughput serving engine for LLMs
This is a serving engine for large language models (LLMs) that deploys and runs them with low latency and high throughput. It loads an LLM and exposes an interface through which applications send prompts and receive responses quickly, even when many users interact with the model simultaneously. This tool is aimed at infrastructure engineers and MLOps specialists responsible for deploying and managing LLM services.
Use this if you need to serve a large language model to many users or applications and require both extremely low latency for individual requests and high throughput for overall traffic.
Not ideal if you are looking for a comprehensive, feature-rich LLM serving solution with extensive out-of-the-box integrations, as this is a research prototype focused on core performance.
Stars: 482
Forks: 62
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/sarathi-serve"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
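The same endpoint can be queried from Python using only the standard library. This is a minimal sketch: the URL pattern is taken from the curl example above, but the response schema is not documented here, so the code simply returns whatever JSON the API sends back. The helper names (`quality_api_url`, `fetch_quality`) are illustrative, not part of any published client.

```python
import json
import urllib.request

# Base path taken from the curl example above; the meaning of the
# "transformers" segment is not documented on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_api_url(owner: str, repo: str) -> str:
    """Compose the quality-API URL for an owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the JSON payload for a repository.

    The response schema is undocumented here, so callers should
    inspect the returned dict themselves.
    """
    with urllib.request.urlopen(quality_api_url(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

Usage would look like `fetch_quality("microsoft", "sarathi-serve")`; keys without an API key are rate-limited to 100 requests/day as noted above.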
Related projects
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...