SemiAnalysisAI/InferenceX
Open Source Continuous Inference Benchmarking Qwen3.5, DeepSeek, GPTOSS - GB200 NVL72 vs MI355X vs B200 vs GB300 NVL72 vs H100 & soon™ TPUv6e/v7/Trainium2/3
This project provides continuous, real-time benchmarks of large language model (LLM) inference performance. It runs open-source inference frameworks across a range of hardware configurations and reports up-to-date metrics on token throughput and efficiency. It is aimed at LLM operators, machine learning engineers, and researchers who manage or deploy large-scale AI models.
655 stars. Actively maintained with 64 commits in the last 30 days.
Use this if you need to continuously track and compare the real-world performance of different LLM inference software stacks and hardware combinations.
Not ideal if you are looking for benchmarks of small-scale AI models or for general-purpose computing tasks outside of LLM inference.
Stars: 655
Forks: 99
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 64
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/SemiAnalysisAI/InferenceX"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
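The same endpoint can be queried from Python instead of curl. The sketch below is a minimal example using only the standard library; the URL path follows the pattern shown above, but the JSON response schema is not documented here, so no field names are assumed.

```python
import json
import urllib.request

# Base endpoint as shown in the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Construct the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_stats(owner: str, repo: str) -> dict:
    """Fetch the JSON payload for a repository (requires network access).

    The response structure is not specified on this page, so the
    result is returned as a plain dict for the caller to inspect.
    """
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_url("SemiAnalysisAI", "InferenceX"))
```

Within the free tier, no authentication header is needed; how an API key is attached for the 1,000/day tier is not specified here.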
Related tools
vllm-project/vllm-ascend
Community maintained hardware plugin for vLLM on Ascend
kvcache-ai/Mooncake
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
uccl-project/uccl
UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache...
sophgo/tpu-mlir
Machine learning compiler based on MLIR for Sophgo TPU.
BBuf/how-to-optim-algorithm-in-cuda
how to optimize some algorithm in cuda.