andreimerfu/pllm
High-performance LLM Gateway built in Go - OpenAI compatible proxy with multi-provider support, adaptive routing, and enterprise features
This gateway helps organizations confidently use large language models (LLMs) from providers such as OpenAI, Anthropic, and Google while keeping their AI applications reliable and cost-effective. It takes your application's existing LLM requests and routes them to the best available provider based on performance, cost, or reliability, helping you avoid vendor lock-in and provider outages. It is aimed at technical decision-makers and architects managing enterprise AI infrastructure rather than individual developers.
Use this if you need to build highly resilient, cost-optimized, and performant AI applications that can switch between different LLM providers without changing your core application code.
Not ideal if you are a single developer experimenting with one LLM provider and do not require advanced traffic management, failover, or multi-provider support.
Stars
14
Forks
2
Language
Go
License
MIT
Last pushed
Feb 22, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/andreimerfu/pllm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
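For consuming that endpoint programmatically, a short Go sketch of decoding the JSON response — note the field names (`repo`, `stars`, `forks`) are assumptions for illustration, since the response schema is not documented on this page; the sample payload stands in for a live call:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// qualityReport is a hypothetical shape for the quality API response;
// adjust the fields to match the actual schema once inspected.
type qualityReport struct {
	Repo  string `json:"repo"`
	Stars int    `json:"stars"`
	Forks int    `json:"forks"`
}

func parseReport(data []byte) (qualityReport, error) {
	var r qualityReport
	err := json.Unmarshal(data, &r)
	return r, err
}

func main() {
	// Sample payload standing in for the body returned by
	// https://pt-edge.onrender.com/api/v1/quality/llm-tools/andreimerfu/pllm
	sample := []byte(`{"repo":"andreimerfu/pllm","stars":14,"forks":2}`)
	r, err := parseReport(sample)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d stars, %d forks\n", r.Repo, r.Stars, r.Forks)
}
```

In a real client you would fetch the body with `http.Get` and pass it to `parseReport`; unknown JSON fields are silently ignored by `encoding/json`, so a partial struct like this is safe against schema additions.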
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...