feiskyer/ollama-kubernetes

Kubernetes Helm chart to deploy Large Language Models with Ollama

Score: 36 / 100 (Emerging)

This Helm chart helps organizations run and manage local large language models (LLMs) on private infrastructure. It deploys the models you specify, such as Llama 3 or Phi-3, together with a web-based chat interface. IT operations teams or platform engineers who manage internal AI capabilities would use it to provide secure, on-premise AI chat services.
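As a rough sketch of what deploying such a chart involves: the commands below clone the repository and install the chart with Helm. The value names (`models`, `webui.enabled`), the release name, and the service name are hypothetical illustrations, not this chart's documented configuration; check its values.yaml for the real keys.

```shell
# Clone the chart repository and install it into a dedicated namespace.
# NOTE: the --set keys below are assumptions for illustration only.
git clone https://github.com/feiskyer/ollama-kubernetes.git
cd ollama-kubernetes

helm install ollama . \
  --namespace ollama --create-namespace \
  --set models={llama3,phi3} \
  --set webui.enabled=true

# Verify the pods are running, then port-forward the web UI locally.
kubectl get pods -n ollama
kubectl port-forward -n ollama svc/ollama-webui 8080:80
```

This assumes a working Kubernetes cluster with `helm` and `kubectl` configured; the chart may instead pull models at pod startup via an init container, which is common for Ollama-based charts.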

No commits in the last 6 months.

Use this if you need to run specific large language models on your own servers, with full control over data privacy and resource allocation, accessible via a web interface.

Not ideal if you're an individual user looking for a simple desktop application or don't have existing Kubernetes infrastructure to deploy on.

Tags: on-premise-AI, private-LLM-deployment, internal-AI-services, data-privacy, AI-infrastructure-management

Status: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 9
Forks: 5
Language: Smarty
License: MIT
Last pushed: Feb 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/feiskyer/ollama-kubernetes"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
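For programmatic access, a small helper can build the endpoint URL for any owner/repo pair and fetch it with the standard library. This is a minimal sketch based only on the URL shown above; the response schema is not documented here, so the example returns the parsed JSON as-is.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record (unauthenticated: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

Usage would be `fetch_quality("feiskyer", "ollama-kubernetes")`; how a higher-rate API key is passed (header vs. query parameter) is not specified above, so it is omitted here.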