vllm and xllm

These two tools are direct competitors: both are high-performance inference and serving engines for LLMs, differing mainly in their optimization strategies, target hardware, and level of adoption.

                 vllm                 xllm
Overall score    87 (Verified)        69 (Established)
Maintenance      22/25                22/25
Adoption         15/25                10/25
Maturity         25/25                15/25
Community        25/25                22/25
Stars            73,007               1,081
Forks            14,312               149
Downloads
Commits (30d)    912                  123
Language         Python               C++
License          Apache-2.0
Flags            No risk flags        No package, no dependents

About vllm

vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

This project helps machine learning engineers and developers efficiently deploy and serve large language models (LLMs) in production environments. You provide your chosen LLM and receive a high-throughput, memory-optimized inference service ready for use. It's designed for ML engineers, MLOps specialists, and developers who need to integrate LLM capabilities into applications without sacrificing speed or cost efficiency.

LLM-deployment model-serving AI-infrastructure MLOps API-development
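To make the workflow above concrete, here is a minimal sketch of serving a model with vllm and querying it over its OpenAI-compatible HTTP endpoint. This is a sketch under assumptions, not an official recipe: the model name is a placeholder (substitute any model vllm supports), and the command assumes vllm's default port (8000) and a GPU environment with vllm installed.

```shell
# Start vllm's OpenAI-compatible API server (listens on port 8000 by default).
# The model name is illustrative only.
vllm serve Qwen/Qwen2.5-0.5B-Instruct

# From another terminal, request a completion from the running server.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen2.5-0.5B-Instruct",
        "prompt": "Say hello in one short sentence.",
        "max_tokens": 32
      }'
```

Because the endpoint follows the OpenAI API shape, existing OpenAI client libraries can usually be pointed at the server's base URL without application changes.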

About xllm

jd-opensource/xllm

A high-performance inference engine for LLMs, optimized for diverse AI accelerators.

This project helps businesses and organizations deploy large language models (LLMs) like DeepSeek-V3.1 or Qwen2/3, especially on Chinese AI accelerators. It takes these pre-trained models and makes them run much faster and more cost-effectively, generating text responses for applications like intelligent customer service, risk control, or ad recommendations. The end-users are AI solution architects, MLOps engineers, and IT infrastructure managers responsible for deploying and managing AI applications.

AI-application-deployment large-language-model-inference AI-infrastructure-optimization enterprise-AI-solutions AI-acceleration-hardware

Scores updated daily from GitHub, PyPI, and npm data.