microsoft/vidur
A large-scale simulation framework for LLM inference
This project helps operations engineers and AI infrastructure managers predict how large language models will perform on different hardware setups and under varying request loads. You provide model details, hardware configurations, and anticipated request patterns; it outputs performance metrics such as response times and throughput, letting you plan capacity and optimize your LLM deployments.
547 stars. No commits in the last 6 months.
Use this if you need to determine the best hardware and software configurations for deploying Large Language Models without costly physical testing.
Not ideal if you are looking for a tool to train LLMs or to benchmark the accuracy of different models.
Stars: 547
Forks: 104
Language: Python
License: MIT
Category:
Last pushed: Jul 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/vidur"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
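For scripted access, the same endpoint can be called from Python. This is a minimal sketch assuming the API returns JSON (the response schema is not documented here); the `quality_url` helper is illustrative, and treating `transformers` as a collection path segment is an assumption inferred from the example URL above.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(collection: str, owner: str, repo: str) -> str:
    # Build the endpoint URL; "transformers" appears to be a
    # collection segment in the example (an assumption).
    return f"{BASE}/{collection}/{owner}/{repo}"


def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    # Assumes the endpoint returns a JSON object; no API key is
    # required within the free 100 requests/day tier.
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("transformers", "microsoft", "vidur")
    print(data)
```

With a free API key, you would presumably attach it as a request header, but the header name is not specified on this page.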
Related models
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...