heyvaldemar/ollama-traefik-letsencrypt-docker-compose
Ollama with Let's Encrypt Using Docker Compose
This project helps DevOps engineers and IT professionals quickly set up a local large language model (LLM) server using Ollama, accessible securely over the internet. You supply your configuration variables in an .env file, and the stack brings up a running Ollama service with automatic SSL certificates from Let's Encrypt, managed by Traefik and orchestrated via Docker Compose.
Use this if you need to deploy Ollama with secure, web-accessible endpoints for local LLM development or testing, without manually configuring SSL.
Not ideal if you are unfamiliar with Docker, Docker Compose, or network configuration, or if you need a production-grade, highly available LLM infrastructure.
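As a rough illustration of the setup described above, a minimal Docker Compose sketch might look like the following. This is not the project's actual compose file; the service names, Traefik labels, and the `ACME_EMAIL` and `OLLAMA_HOSTNAME` variables are assumptions for illustration only.

```yaml
# Hypothetical sketch: Ollama behind Traefik with Let's Encrypt certificates.
# Variable names (ACME_EMAIL, OLLAMA_HOSTNAME) are illustrative assumptions.
services:
  traefik:
    image: traefik:latest
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=${ACME_EMAIL}
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-data:/root/.ollama
    labels:
      # Route HTTPS traffic for the configured hostname to Ollama's API port.
      - traefik.enable=true
      - traefik.http.routers.ollama.rule=Host(`${OLLAMA_HOSTNAME}`)
      - traefik.http.routers.ollama.entrypoints=websecure
      - traefik.http.routers.ollama.tls.certresolver=le
      - traefik.http.services.ollama.loadbalancer.server.port=11434

volumes:
  ollama-data:
```

With a sketch like this, you would set `ACME_EMAIL` and `OLLAMA_HOSTNAME` in an .env file alongside the compose file and run `docker compose up -d`; Traefik then requests and renews the certificate automatically.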
Stars
23
Forks
4
Language
Shell
License
—
Category
Last pushed
Feb 19, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/heyvaldemar/ollama-traefik-letsencrypt-docker-compose"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
RunanywhereAI/runanywhere-sdks
Production ready toolkit to run AI locally
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)