andreimerfu/pllm

High-performance LLM Gateway built in Go - OpenAI compatible proxy with multi-provider support, adaptive routing, and enterprise features

Quality score: 40 / 100 (Emerging)

This gateway helps organizations use large language models (LLMs) from providers such as OpenAI, Anthropic, and Google while keeping their AI applications reliable and cost-effective. It takes your application's existing LLM requests and routes each one to the best available provider based on performance, cost, or reliability, so you can avoid vendor lock-in and ride out provider outages. It is aimed at technical decision-makers and architects managing enterprise AI infrastructure rather than individual developers.

Use this if you need to build highly resilient, cost-optimized, and performant AI applications that can switch between different LLM providers without changing your core application code.

Not ideal if you are a single developer experimenting with one LLM provider and do not require advanced traffic management, failover, or multi-provider support.

Tags: AI infrastructure, LLM operations, cloud architecture, vendor management, cost optimization

No package published; no dependents.
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 15 / 25
Community: 10 / 25


Stars: 14
Forks: 2
Language: Go
License: MIT
Category: llm-api-gateways
Last pushed: Feb 22, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/andreimerfu/pllm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.