InftyAI/llmaz
☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!
This project helps MLOps engineers and platform teams easily set up and manage large language models (LLMs) in a production environment. It takes various LLM backends and model providers as input, creating a scalable inference platform on Kubernetes. The output is a robust, performant service ready to handle user queries or integrate into applications.
Use this if you need a production-ready, scalable, and easy-to-manage platform for deploying and serving large language models on Kubernetes.
Not ideal if you are an individual developer experimenting with LLMs locally or do not use Kubernetes for your infrastructure.
Stars: 293
Forks: 45
Language: Go
License: Apache-2.0
Category:
Last pushed: Jan 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InftyAI/llmaz"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
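The same request can be made programmatically. Below is a minimal Python sketch that builds the endpoint URL shown in the curl example above for any repository; the helper name is ours, and we assume other repositories follow the same `owner/repo` path pattern (the response schema is not documented here, so the sketch stops at the URL).

```python
# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

# Reproduces the URL from the curl example:
print(quality_url("InftyAI", "llmaz"))
```

The resulting URL can then be fetched with any HTTP client (e.g. `urllib.request` or `requests`), subject to the rate limits above.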
Related tools
AlexsJones/llmfit
Hundreds of models & providers. One command to find what runs on your hardware.
victordibia/llmx
An API for Chat Fine-Tuned Large Language Models (llm)
Chen-zexi/vllm-cli
A command-line interface tool for serving LLM using vLLM.
livehl/aimirror
🚀 200× faster! A download accelerator for the AI era | Speeds up Docker/PyPI/HuggingFace/CRAN | Parallel chunked downloads + smart caching to make downloads fly
TakatoHonda/sui-lang
粋 (Sui) - A programming language optimized for LLM code generation