nekomeowww/ollama-operator

🚒 Yet another operator for running large language models on Kubernetes with ease. Powered by Ollama! 🐫

Score: 52 / 100 (Established)

This project helps operations engineers and MLOps teams easily deploy and manage multiple large language models (LLMs) on a Kubernetes cluster. You provide the name of an Ollama-compatible model, and the operator handles fetching, loading, and running it as a service. It's designed for those who need to scale their LLM inference capabilities beyond a single machine, integrating seamlessly into existing Kubernetes infrastructure.
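
As a rough sketch of that workflow, the Go snippet below uses the Kubernetes dynamic client to create a Model custom resource that names an Ollama-compatible model. The group/version (ollama.ayaka.io/v1), resource name (models), and spec.image field are assumptions about the operator's CRD; consult the repository for the exact schema.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (adjust for in-cluster or other setups).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed GroupVersionResource for the operator's Model CRD.
	gvr := schema.GroupVersionResource{Group: "ollama.ayaka.io", Version: "v1", Resource: "models"}

	// Declare a model by name; the operator is expected to fetch, load,
	// and expose it as a service inside the cluster.
	model := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "ollama.ayaka.io/v1",
		"kind":       "Model",
		"metadata":   map[string]interface{}{"name": "phi"},
		"spec":       map[string]interface{}{"image": "phi"},
	}}

	created, err := client.Resource(gvr).Namespace("default").Create(context.TODO(), model, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created model:", created.GetName())
}

In practice most users would apply the equivalent YAML manifest with kubectl; the dynamic client is used here only to keep the example self-contained in Go.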


Use this if you are an operations engineer or MLOps specialist managing a Kubernetes environment and need to deploy and scale various large language models for different applications or teams.

Not ideal if you only need to run LLMs locally on a single machine or are not working with Kubernetes infrastructure.

Tags: MLOps · Kubernetes deployment · LLM inference · model serving · cloud infrastructure
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 234
Forks: 26
Language: Go
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/nekomeowww/ollama-operator"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
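
For programmatic access, here is a minimal Go sketch of the same request. It assumes the endpoint returns JSON and decodes the body into a generic map, since the response schema isn't documented on this page.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Quality endpoint shown above; no API key is required for up to 100 requests/day.
	url := "https://pt-edge.onrender.com/api/v1/quality/transformers/nekomeowww/ollama-operator"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode into a generic map because the exact response fields are not specified here.
	var data map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", data)
}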