jmaczan/tiny-vllm

High-performance LLM inference engine, a younger sibling of vLLM

Score: 26 / 100 (Experimental)

This project helps you understand and build a high-performance server to run large language models (LLMs) efficiently on NVIDIA GPUs. It takes a pre-trained LLM file (like Llama 3.2) and produces a responsive engine capable of generating text quickly for multiple users. This is ideal for a machine learning engineer, systems programmer, or researcher focused on deploying and optimizing LLMs.

Use this if you want to learn the intricacies of LLM inference from scratch, understand how to implement low-level optimizations with C++ and CUDA, or build a custom, high-speed LLM serving solution.

Not ideal if you are looking to train your own LLM, design new model architectures, or simply use an existing off-the-shelf inference solution without understanding its internal workings.

LLM deployment, GPU optimization, inference engineering, high-performance computing, CUDA development
No package, no dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 11 / 25
Community 0 / 25

Stars: 12
Forks:
Language: C++
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jmaczan/tiny-vllm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
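For scripted access, the same endpoint can be called from any HTTP client. Below is a minimal Python sketch that fetches and pretty-prints the response using only the standard library. It assumes the endpoint returns JSON; since the response fields are not documented on this page, the sketch prints the full payload rather than picking specific keys.

import json
import urllib.request

# Same endpoint as the curl example above; no API key is needed
# for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/jmaczan/tiny-vllm"

with urllib.request.urlopen(URL) as resp:
    # Assumption: the API responds with a JSON document describing the
    # quality scores shown on this page; inspect the keys before relying on them.
    data = json.load(resp)

print(json.dumps(data, indent=2))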