obirler/LLMProxy
LLMProxy is an intelligent large language model backend routing proxy service.
LLMProxy helps developers building LLM-powered applications manage connections to multiple providers such as OpenAI, Google Gemini, or local models. It acts as a central hub: your application sends requests to the proxy, which routes them to the best available LLM backend. This keeps your application responsive and reliable even when some providers have issues, or when you need to combine outputs from multiple models.
Use this if your application relies on LLMs and you want to abstract away the complexity of managing multiple API keys, routing requests, handling errors, and orchestrating different providers or models.
Not ideal if you only use a single, stable LLM provider for a simple application and do not anticipate needing advanced routing, load balancing, or failover capabilities.
Stars: 22
Forks: 3
Language: C#
License: MIT
Category:
Last pushed: Dec 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/obirler/LLMProxy"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
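A small wrapper around the call above can switch between anonymous and keyed access. This is a sketch: the `Authorization: Bearer` header and the `PT_EDGE_API_KEY` variable name are assumptions, not documented parts of the API, so check the service's docs for the actual authentication scheme.

```shell
#!/bin/sh
# Fetch quality data for a repo from the pt-edge API.
# PT_EDGE_API_KEY is a hypothetical env var; the auth header is assumed.
REPO="obirler/LLMProxy"
URL="https://pt-edge.onrender.com/api/v1/quality/llm-tools/${REPO}"

if [ -n "${PT_EDGE_API_KEY:-}" ]; then
  # Keyed access (assumed 1,000 requests/day tier)
  curl -s -H "Authorization: Bearer ${PT_EDGE_API_KEY}" "$URL"
else
  # Anonymous access (100 requests/day)
  curl -s "$URL"
fi
```

Leaving `PT_EDGE_API_KEY` unset falls back to the anonymous tier, so the script works out of the box.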
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...