InftyAI/llmaz

☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!

Quality score: 55/100 (Established)

This project helps MLOps engineers and platform teams set up and manage large language models (LLMs) in production. It supports a range of LLM inference backends and model providers, turning them into a scalable inference platform on Kubernetes that exposes a robust, performant service ready to handle user queries or to integrate into applications.
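To make that input-to-output flow concrete, here is a minimal sketch of what deploying a model through llmaz can look like. It assumes the OpenModel and Playground custom resources shown in the project README; the API versions, kinds, and field names are best-effort assumptions and may differ between releases, so verify them against the CRDs installed in your cluster.

kubectl apply -f - <<EOF
# Hypothetical sketch based on the llmaz README; verify kinds and fields against your installed CRDs.
apiVersion: llmaz.io/v1alpha1
kind: OpenModel            # registers a model to be pulled from a model hub
metadata:
  name: qwen2-0--5b
spec:
  familyName: qwen2
  source:
    modelHub:
      modelID: Qwen/Qwen2-0.5B-Instruct
---
apiVersion: inference.llmaz.io/v1alpha1
kind: Playground           # a serving workload that claims the model above
metadata:
  name: qwen2-0--5b
spec:
  replicas: 1
  modelClaim:
    modelName: qwen2-0--5b
EOF

Once the controller reconciles these resources, llmaz provisions an inference backend for the claimed model and exposes it as a regular Kubernetes service that applications can call.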


Use this if you need a production-ready, scalable, and easy-to-manage platform for deploying and serving large language models on Kubernetes.

Not ideal if you are an individual developer experimenting with LLMs locally or do not use Kubernetes for your infrastructure.

Tags: MLOps · LLM deployment · Kubernetes management · AI infrastructure · model serving
Package: none · Dependents: none
Score breakdown:
- Maintenance: 10/25
- Adoption: 10/25
- Maturity: 16/25
- Community: 19/25
The sub-scores sum to the overall score: 10 + 10 + 16 + 19 = 55.

Stars: 293
Forks: 45
Language: Go
License: Apache-2.0
Last pushed: Jan 26, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InftyAI/llmaz"

The API is open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
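If you want to inspect or post-process the response, piping it through jq works. This sketch only pretty-prints the JSON, since the response schema is not documented on this card:

# Requires jq; -s silences curl's progress meter.
curl -s "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InftyAI/llmaz" | jq .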