envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
This project helps operations engineers and platform architects manage access to Generative AI services. It acts as a unified entry point: applications send requests to the gateway, which routes them to cloud AI providers or self-hosted models while handling authentication, rate limiting, and LLM-specific traffic optimization. The result is controlled, secure, and efficient access to AI capabilities for internal applications.
1,428 stars. Actively maintained with 58 commits in the last 30 days.
Use this if you need to standardize and control how your applications interact with multiple Generative AI services, whether they are cloud-based or self-hosted.
Not ideal if you only use a single AI model directly within an application and do not require centralized management or advanced traffic control.
Stars: 1,428
Forks: 185
Language: Go
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 58
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/envoyproxy/ai-gateway"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
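The endpoint above can also be scripted. A minimal sketch: the base URL and path scheme are taken from the curl example; the commented-out `jq` filter and its `.stars` field are assumptions about the response shape, not documented behavior.

```shell
#!/bin/sh
# Compose the catalog API endpoint for a given owner/repo pair.
BASE="https://pt-edge.onrender.com/api/v1/quality/llm-tools"
REPO="envoyproxy/ai-gateway"
URL="$BASE/$REPO"

# Print the composed endpoint so the script is verifiable offline.
echo "$URL"

# To actually fetch (requires network access and jq; the .stars
# field name is a guess about the JSON response):
# curl -s "$URL" | jq '.stars'
```

Swapping `REPO` for another `owner/name` pair queries a different tool with the same script.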
Related tools
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...
intentee/paddler
Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 Alternative...