mitja/llamatunnel

Publish local LLMs and LLM apps on the internet.

Score: 37 / 100 (Emerging)

Llama Tunnel helps you make your local AI models and applications accessible from anywhere on the internet. You can use it to share your private language models, like those running on Ollama, and their user interfaces, such as OpenWebUI, with others or access them remotely on your devices. This tool is ideal for developers, researchers, or anyone who wants to easily host and distribute their own AI services without a complex setup.

No commits in the last 6 months.

Use this if you need to expose your local large language models and their web interfaces securely to the internet or your local network using a custom domain.

Not ideal if you prefer to use managed cloud services for hosting your LLMs or if you don't have experience with Docker, Cloudflare, or command-line tools.

Tags: AI hosting, LLM deployment, remote access, developer tools, AI application sharing
Status: Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 12 / 25

Stars: 27
Forks: 4
Language: Jinja
License: MIT
Last pushed: Aug 17, 2025
Commits (30d): 0

Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mitja/llamatunnel"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
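The curl command above can also be reproduced programmatically. A minimal sketch in Python: the `quality_api_url` helper and the `category/owner/repo` path pattern are assumptions generalized from the single example URL shown above, not documented API behavior, and the response format is not specified here.

```python
# Build the quality-API endpoint URL for a repository.
# Assumption: the path follows the pattern seen in the one example above,
# i.e. /api/v1/quality/<category>/<owner>/<repo>.
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Return the endpoint URL for one repository's quality data."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_api_url("llm-tools", "mitja", "llamatunnel")
print(url)  # https://pt-edge.onrender.com/api/v1/quality/llm-tools/mitja/llamatunnel

# To actually fetch the data (requires network access), one could use urllib:
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Within the free tier (100 requests/day without a key), the same URL works from any HTTP client.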