COO-LLM/coo-llm-main
A high-performance reverse proxy that intelligently distributes requests across multiple LLM providers (OpenAI, Google Gemini, Anthropic Claude) and API keys. It provides seamless OpenAI API compatibility, advanced load balancing algorithms, real-time cost optimization, and enterprise-grade observability.
This tool helps organizations manage and optimize their use of large language models. Acting as a central hub, it takes your LLM requests and intelligently routes them across providers and API keys. The result is better performance, lower costs, and more reliable access to the LLMs your business applications depend on, all without changing your existing code.
Use this if you are building applications that use multiple LLM providers or API keys and need to optimize costs, improve reliability, and manage performance centrally.
Not ideal if your application only uses a single LLM provider with a single API key and does not require advanced load balancing or cost optimization features.
Stars: 8
Forks: 1
Language: Go
License: —
Category: —
Last pushed: Oct 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/COO-LLM/coo-llm-main"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...