GeeeekExplorer/nano-vllm

Nano-vLLM

Score: 53 / 100 (Established)

Nano-vLLM is for machine learning engineers and developers who need to run large language models (LLMs) efficiently on their own infrastructure. It takes a pre-trained LLM and a batch of text prompts and generates completions with high throughput. It suits teams managing self-hosted inference for AI applications.


Use this if you are a developer looking for a simplified, high-performance way to serve LLM inference offline.

Not ideal if you need a solution for real-time, low-latency online inference or if you prefer a fully managed API service.
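
For a sense of the offline workflow, here is a minimal generation sketch. It assumes Nano-vLLM mirrors vLLM's LLM / SamplingParams interface, as its README suggests; the model path, sampling values, and the "text" field in the output are placeholders to verify against the project's docs.

from nanovllm import LLM, SamplingParams

# Placeholder path: point this at a locally downloaded checkpoint.
llm = LLM("/path/to/your/model", enforce_eager=True, tensor_parallel_size=1)

# Illustrative sampling settings, not recommendations.
sampling_params = SamplingParams(temperature=0.6, max_tokens=256)

prompts = ["Explain what a KV cache is in one paragraph."]
outputs = llm.generate(prompts, sampling_params)

# Output structure is assumed: a list with one entry per prompt,
# each carrying the generated text.
print(outputs[0]["text"])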

Tags: LLM inference · MLOps · backend development · model deployment · AI infrastructure
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 22 / 25

The four components sum to the overall score: 6 + 10 + 15 + 22 = 53 / 100.


Stars: 12,189
Forks: 1,704
Language: Python
License: MIT
Last pushed: Nov 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/GeeeekExplorer/nano-vllm"

Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000 requests/day.
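
If you would rather call the endpoint from Python than curl, the sketch below uses the requests library against the URL shown above. It makes no assumptions about the response schema beyond it being JSON.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/GeeeekExplorer/nano-vllm"

# The free tier (100 requests/day) needs no key, so this is a bare GET.
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Print the raw payload; consult the API docs for exact field names.
print(resp.json())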