lynxai-team/goinfer
Local LLM proxy, DevOps friendly
This tool helps DevOps teams securely expose locally hosted Large Language Models (LLMs) to external users or applications without complex network configurations such as VPNs or port forwarding. It serves GGUF-formatted models from a local GPU machine through a secure, encrypted HTTPS API endpoint that can be reached from anywhere. This is ideal for organizations or individuals who want to use their local GPUs for LLM inference while maintaining data privacy and simplifying access.
Use this if you need to provide secure, remote access to your organization's or personal GPU-accelerated LLMs without exposing your internal network or relying on public cloud inference services.
Not ideal if you don't have local GPUs, don't need remote access to your LLMs, or are comfortable with existing, simpler solutions like direct local network access.
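To make the workflow concrete, here is a minimal Go sketch of a client sending a prompt to such a proxy over HTTPS. The endpoint path (/completion), the request fields (prompt, temperature), and the bearer-token header are illustrative assumptions, not the documented goinfer API; check the repository for the real routes and request shape.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// completionRequest is a hypothetical request body; the actual field names
// depend on the goinfer API, which this sketch does not document.
type completionRequest struct {
	Prompt      string  `json:"prompt"`
	Temperature float64 `json:"temperature"`
}

func main() {
	// Hypothetical endpoint and token: replace with your proxy's URL and credentials.
	endpoint := "https://llm.example.com/completion"
	token := "YOUR_API_KEY"

	body, err := json.Marshal(completionRequest{Prompt: "Hello", Temperature: 0.2})
	if err != nil {
		log.Fatal(err)
	}

	req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Decode whatever JSON the proxy returns into a generic map,
	// since the response schema is not specified on this page.
	var out map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}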
Stars
9
Forks
2
Language
Go
License
MIT
Category
llm-tools
Last pushed
Feb 08, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lynxai-team/goinfer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
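For programmatic use, the Go sketch below fetches the same endpoint and decodes the JSON response into a generic map. The URL and the rate limits come from this page; the response schema and the X-API-Key header name used for the optional key are assumptions, so verify them against the API documentation.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	url := "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lynxai-team/goinfer"

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Optional: a free API key raises the limit from 100 to 1,000 requests/day.
	// The header name below is an assumption; check the API docs for the real one.
	// req.Header.Set("X-API-Key", "YOUR_KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The response schema is not documented here, so decode into a generic map.
	var data map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", data)
}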
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...