cncf/llm-starter-pack
🤖 Get started with LLMs on your kind cluster, today!
This project helps developers quickly set up and experiment with large language models (LLMs) in a Kubernetes environment on their local machine. Using your existing Docker and Kubernetes tooling, it deploys a running LLM chatbot demo accessible in your browser. It is designed for developers who want to test LLMs in a cloud-native setting without a complex infrastructure setup.
Use this if you are a developer looking to rapidly deploy and interact with an LLM in a local Kubernetes cluster.
Not ideal if you are a non-developer seeking an off-the-shelf LLM application or a production-ready deployment.
Stars: 172
Forks: 23
Language: Python
License: —
Category: —
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/cncf/llm-starter-pack"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
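The same endpoint can be called from Python instead of curl. A minimal sketch, assuming only the base URL shown in the curl example above and a `.../<owner>/<repo>` path pattern; the JSON response schema is not assumed:

```python
import json
import urllib.request

# Base path taken from the curl example above; the
# .../quality/generative-ai/<owner>/<repo> pattern is an assumption.
BASE = "https://pt-edge.onrender.com/api/v1/quality/generative-ai"

def endpoint(owner: str, repo: str) -> str:
    """Build the per-repository quality-data URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (no schema assumed here)."""
    with urllib.request.urlopen(endpoint(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the URL for this repository's quality data.
    print(endpoint("cncf", "llm-starter-pack"))
```

Within the free tier, the unauthenticated call above is subject to the 100 requests/day limit.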
Related tools
openvinotoolkit/model_server
A scalable inference server for models optimized with OpenVINO™
madroidmaq/mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically...
NVIDIA-NeMo/Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based...
generative-computing/mellea
Mellea is a library for writing generative programs.
rhesis-ai/rhesis
Open-source platform & SDK for testing LLM and agentic apps. Define expected behavior, generate...