mitja/llamatunnel
Publish local LLMs and LLM apps on the internet.
Llama Tunnel helps you make your local AI models and applications accessible from anywhere on the internet. You can use it to share your private language models, like those running on Ollama, and their user interfaces, such as OpenWebUI, with others or access them remotely on your devices. This tool is ideal for developers, researchers, or anyone who wants to easily host and distribute their own AI services without a complex setup.
No commits in the last 6 months.
Use this if you need to expose your local large language models and their web interfaces securely to the internet or your local network using a custom domain.
Not ideal if you prefer to use managed cloud services for hosting your LLMs or if you don't have experience with Docker, Cloudflare, or command-line tools.
Stars: 27
Forks: 4
Language: Jinja
License: MIT
Category:
Last pushed: Aug 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mitja/llamatunnel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
RunanywhereAI/runanywhere-sdks
Production ready toolkit to run AI locally
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)