COO-LLM/coo-llm-main

A high-performance reverse proxy that intelligently distributes requests across multiple LLM providers (OpenAI, Google Gemini, Anthropic Claude) and API keys. It provides seamless OpenAI API compatibility, advanced load balancing algorithms, real-time cost optimization, and enterprise-grade observability.

Quality score: 33 / 100 (Emerging)

This tool helps organizations manage and optimize their use of large language model (LLM) providers such as OpenAI, Google Gemini, and Anthropic Claude. It acts as a central hub, taking your requests for an LLM and intelligently routing them to different providers or API keys. The result is better performance, lower costs, and more reliable access to the LLMs your business applications rely on, all without changing your existing code.
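
Because the proxy exposes an OpenAI-compatible endpoint, an existing client only needs its base URL pointed at the proxy. The following Go sketch illustrates that idea under stated assumptions: the proxy address, path, and bearer-token header are hypothetical placeholders, not the project's documented configuration.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed local proxy address; the real listen address and path
	// depend on how the coo-llm instance is configured.
	proxyURL := "http://localhost:8080/v1/chat/completions"

	// Standard OpenAI-style chat completion payload; the proxy decides
	// which provider and API key actually serve the request.
	payload := []byte(`{
		"model": "gpt-4o-mini",
		"messages": [{"role": "user", "content": "Hello from the proxy"}]
	}`)

	req, err := http.NewRequest(http.MethodPost, proxyURL, bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// Placeholder auth header; the actual client auth scheme is an assumption.
	req.Header.Set("Authorization", "Bearer YOUR_PROXY_KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```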

Use this if you are building applications that use multiple LLM providers or API keys and need to optimize costs, improve reliability, and manage performance centrally.

Not ideal if your application only uses a single LLM provider with a single API key and does not require advanced load balancing or cost optimization features.

Tags: LLM operations, API management, cost optimization, AI infrastructure, enterprise AI
No package published · No dependents
Maintenance: 6 / 25
Adoption: 4 / 25
Maturity: 15 / 25
Community: 8 / 25

Stars: 8
Forks: 1
Language: Go
License: not listed
Category: llm-api-gateways
Last pushed: Oct 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/COO-LLM/coo-llm-main"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
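
The same endpoint can also be consumed programmatically. Here is a minimal Go sketch, assuming the endpoint returns JSON; because the response schema is not documented in this listing, the body is decoded into a generic map rather than named fields.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	url := "https://pt-edge.onrender.com/api/v1/quality/llm-tools/COO-LLM/coo-llm-main"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Schema is not specified here, so decode into a generic map.
	var data map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", data)
}
```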