LLM Inference Serving Tools

Tools and frameworks for deploying, serving, and scaling LLM inference endpoints in production environments. Includes optimization techniques (quantization, batching, caching), serving platforms (vLLM, Ray Serve, BentoML), and infrastructure solutions. Does NOT include client SDKs, application frameworks, or fine-tuning tools.

We track 72 LLM inference serving tools. One scores above 70 (Verified tier). The highest-rated is thu-pacman/chitu at 79/100 with 3,418 stars. Only 1 of the top 10 is actively maintained.

Get all 72 projects as JSON

curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=llm-tools&subcategory=llm-inference-serving&limit=20"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
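The same query can be made from Python. A minimal sketch using only the standard library; the endpoint URL and parameters come from the curl command above, while the shape of the returned JSON is not specified here, so the code only parses it generically:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Build the dataset query URL with properly encoded parameters."""
    params = urllib.parse.urlencode(
        {"domain": domain, "subcategory": subcategory, "limit": limit}
    )
    return f"{BASE_URL}?{params}"

def fetch_dataset(domain: str, subcategory: str, limit: int = 20):
    """Fetch the dataset and parse the response body as JSON (network call)."""
    with urllib.request.urlopen(build_url(domain, subcategory, limit)) as resp:
        return json.load(resp)

# Same request as the curl example above:
url = build_url("llm-tools", "llm-inference-serving", limit=20)
```

Raise `limit` to retrieve more of the 72 tracked projects per request, subject to the daily rate limits described above.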

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | thu-pacman/chitu | High-performance inference framework for large language models, focusing on... | 79 | Verified |
| 2 | NotPunchnox/rkllama | Ollama alternative for Rockchip NPU: An efficient solution for running AI... | 57 | Established |
| 3 | sophgo/LLM-TPU | Run generative AI models on Sophgo BM1684X/BM1688 | 57 | Established |
| 4 | Deep-Spark/DeepSparkHub | DeepSparkHub selects hundreds of application algorithms and models, covering... | 53 | Established |
| 5 | howard-hou/VisualRWKV | VisualRWKV is the visual-enhanced version of the RWKV language model,... | 49 | Emerging |
| 6 | bentoml/llm-inference-handbook | Everything you need to know about LLM inference | 48 | Emerging |
| 7 | tomdyson/microllama | The smallest possible LLM API | 48 | Emerging |
| 8 | HuaizhengZhang/AI-Infra-from-Zero-to-Hero | 🚀 Awesome System for Machine Learning ⚡️ AI System Papers and Industry... | 48 | Emerging |
| 9 | liguodongiot/llm-resource | A curated collection of high-quality full-stack LLM resources | 47 | Emerging |
| 10 | ucbepic/BARGAIN | Low-Cost LLM-Powered Data Processing with Theoretical Guarantees | 47 | Emerging |
| 11 | eth-sri/lmql | A language for constraint-guided and efficient LLM programming. | 46 | Emerging |
| 12 | 0-mostafa-rezaee-0/Batch_LLM_Inference_with_Ray_Data_LLM | Batch LLM Inference with Ray Data LLM: From Simple to Advanced | 46 | Emerging |
| 13 | manuelescobar-dev/LLM-Tools | Open-source calculator for LLM system requirements. | 44 | Emerging |
| 14 | aws-samples/easy-model-deployer | Deploy open-source LLMs on AWS in minutes, with OpenAI-compatible APIs and... | 44 | Emerging |
| 15 | FareedKhan-dev/llm-scale-deploy-guide | An end-to-end pipeline to optimize and host an LLM for 100K parallel queries | 43 | Emerging |
| 16 | kungfuai/CVlization | Practical workflows for training and inference on AI models | 42 | Emerging |
| 17 | vicharak-in/Axon-NPU-Guide | This repository contains a guide on how to set up toolkits to use the NPU present... | 41 | Emerging |
| 18 | Seeed-Projects/reComputer-RK-LLM | This repository utilizes Docker to package large language models and... | 41 | Emerging |
| 19 | Pelochus/ezrknpu | Easy installation and usage of Rockchip's NPUs found in RK3588 and similar SoCs | 41 | Emerging |
| 20 | wangcx18/llm-vscode-inference-server | An endpoint server for efficiently serving quantized open-source LLMs for code. | 40 | Emerging |
| 21 | av1d/rk3588_npu_llm_server | Allows access via HTTP to an LLM running on the RK3588 NPU. Returns a JSON response. | 39 | Emerging |
| 22 | alibaba/ServeGen | A framework for generating realistic LLM serving workloads | 38 | Emerging |
| 23 | AlexKaravaev/world-creator | LLM-based CLI utility for creating simulation worlds. | 38 | Emerging |
| 24 | av1d/NPU-Chat | Web chat front end for rk3588_npu_llm_server / RK3588 LLM chat interface | 38 | Emerging |
| 25 | thekevinscott/vicuna-7b | Vicuna 7B is a large language model that runs in the browser. Exposes... | 37 | Emerging |
| 26 | CHKDSKLabs/l-bom | L-BOM is a small Python CLI that inspects local LLM model artifacts such as... | 37 | Emerging |
| 27 | tpietruszka/rate_limited | Efficient parallel utilization of slow, rate-limited APIs - like those of... | 37 | Emerging |
| 28 | aws-samples/amazon-sagemaker-llama2-response-streaming-recipes | Amazon SageMaker Llama 2 Inference via Response Streaming | 36 | Emerging |
| 29 | jmaczan/torch-webgpu | PyTorch compiler and WebGPU runtime | 35 | Emerging |
| 30 | wudingjian/rkllm_chat | Deploy LLM models on the Rockchip RK3588 chip and run NPU inference on the dev board | 35 | Emerging |
| 31 | serialscriptr/Orange-PI-5-Pro-MLC-LLM | Guide I wrote mostly for myself on how to run mlc-llm on the Orange Pi 5 Pro | 35 | Emerging |
| 32 | SRSWTI/axis | AI eXplainable Inference & Search. Open Sourcing on-premise, ultra-fast... | 35 | Emerging |
| 33 | Zerohertz/PyCon_KR_2025_Tutorial_vLLM | 🐍 PyCon Korea 2025 Tutorial: A deep dive into vLLM's OpenAI-Compatible Server 🐍 | 34 | Emerging |
| 34 | plushpluto/kllm | Welcome to KLLM, an advanced project focused on core kernel AI development,... | 30 | Emerging |
| 35 | selimsandal/OneShotNPU | An NPU designed using an LLM with a single prompt | 29 | Experimental |
| 36 | christophe0606/MLHelium | TinyLlama on Cortex-M55 using CMSIS-DSP and Helium vector instructions | 28 | Experimental |
| 37 | cdepillabout/mkAIDerivation | Generate a Nix derivation on the fly using an LLM | 28 | Experimental |
| 38 | godaai/llm-inference | Resources for Large Language Model Inference | 27 | Experimental |
| 39 | Leon6225/InternVL3.5-4B-NPU | 🌌 Advance multimodal AI with InternVL3.5-4B for RK3588 NPU, enhancing vision... | 27 | Experimental |
| 40 | yy29/aws-ec2-tips-llm-chat-ai | Tips for setting up AI & Machine Learning R&D Environment and LLM Training &... | 26 | Experimental |
| 41 | Zerohertz/Instruct_KR_2025_Summer_Meetup_vLLM | 🎹 Instruct.KR 2025 Summer Meetup: Open-source LLMs to production with vLLM 🎹 | 25 | Experimental |
| 42 | CuzImSlymi/Apertis-LLM | Apertis LLM. Clean. Fast. Built Different. Custom LLM architecture designed... | 24 | Experimental |
| 43 | parawaveio/parawave | One decorator turns any function into a durable parallel runner. | 23 | Experimental |
| 44 | gfhe/LLM | Exploring private LLM training and deployment | 22 | Experimental |
| 45 | romitjain/awesome-llm-systems | This repository aims to consolidate resources for learning about systems for LLMs | 22 | Experimental |
| 46 | daslearning-org/OnLLM | OnLLM is the platform to run LLM or SLM models using OnnxRuntime directly on... | 22 | Experimental |
| 47 | imetallica/nano-ai | Toolkit to train and build small LLMs in Elixir | 22 | Experimental |
| 48 | Joao1PNM/awesome-llm-training-inference | Explore frameworks, tools, and resources for efficient large language model... | 22 | Experimental |
| 49 | toopac01/InternVL3.5-8B-NPU | 🌌 Explore InternVL3.5-8B NPU for advanced multimodal capabilities on RK3588,... | 22 | Experimental |
| 50 | ray-project/ray-serve-arize-observe | Building Real-Time Inference Pipelines with Ray Serve | 21 | Experimental |
| 51 | mddunlap924/LLM-Inference-Serving | This repository demonstrates LLM execution on CPUs using packages like... | 21 | Experimental |
| 52 | ray-project/anyscale-berkeley-ai-hackathon | Ray and Anyscale for the UC Berkeley AI Hackathon! | 21 | Experimental |
| 53 | ravijo/pi-llm | Run large language models locally on a Raspberry Pi Zero 2W (512 MB RAM)... | 21 | Experimental |
| 54 | zia1138/rayevolve | Experimental project for LLM-guided algorithm design and optimization built on Ray | 21 | Experimental |
| 55 | aratan/LLM-CLI | LLM aratan/qwen3.5-uncensored:9b | 21 | Experimental |
| 56 | gbaptista/nano-apps | Tiny applications that can be embedded in Nano Bots—small, AI-powered robots... | 20 | Experimental |
| 57 | cjmcv/ai-infra-notes | Reading notes on the open source code of AI infrastructure (sglang, llm,... | 18 | Experimental |
| 58 | oriolrius/sagemaker-llm-endpoint | Deploy HuggingFace LLMs on AWS SageMaker with vLLM, OpenAI-compatible API... | 17 | Experimental |
| 59 | yutingshih/eai2024-final | Enhancing User Privacy by Local Deployment of LLMs, Final Project of EAI 2024 Fall | 16 | Experimental |
| 60 | CosmonautCode/Tiny-Local-LLM-System | A lightweight, self-contained Python project for running a local large... | 16 | Experimental |
| 61 | Qually5/distributed-training-ops | A collection of scripts and configurations for managing distributed training... | 14 | Experimental |
| 62 | Rustem/ddl-playbook | Distributed Deep Learning Playbook | 14 | Experimental |
| 63 | ParthaPRay/Readability_Ollama_LLM | This repo shows the coding of readability analysis of responses from... | 13 | Experimental |
| 64 | ParthaPRay/python_rust_ollama_analysis | This repo shows the coding of how Ollama-localized LLMs on a Raspberry Pi 4B... | 13 | Experimental |
| 65 | look4pritam/InferenceServer-LargeLanguageModels | Large Language Models Inference Server | 13 | Experimental |
| 66 | hansen-han/mlx-imessage | Fine-tuning local LLMs on your iMessage chats using QLoRA and mlx to use... | 13 | Experimental |
| 67 | sasomoto/Local-Inference-Server | This contains the code I used to set up a local inference server as well as... | 11 | Experimental |
| 68 | anyscale/learn | Self-paced Ray and Anyscale education. | 11 | Experimental |
| 69 | windson/inferentia-deployments | Deploy large models on AWS Inferentia (Inf2) instances. | 11 | Experimental |
| 70 | Clivern/Mandrillus | 🔥 Serve LLM models in production for optimal performance and cost. | 11 | Experimental |
| 71 | ParthaPRay/peer_to_peer_local_llm_interaction | This repo contains code on how peer-to-peer communication is established... | 11 | Experimental |
| 72 | ParthaPRay/llm_dynamic_load_unload | This repo contains code for dynamically loading and unloading LLMs on a local device | 11 | Experimental |