doublewordai/control-layer
The world’s fastest AI model gateway (450x less overhead than LiteLLM). Unified access to LLMs across endpoints (OpenAI, self-hosted, etc.) behind a single authentication layer, with API key generation, user management, request logging, and more.
This tool helps developers and IT operations teams manage and secure access to multiple AI models from a single point. It acts as a high-performance intermediary, accepting AI inference requests and routing them to the correct backend, whether that is a proprietary API or a self-hosted open-source model. The primary users are MLOps engineers, backend developers, and system administrators who need to integrate AI models into production applications while maintaining control and security.
Use this if you need a single, fast, and secure gateway to control access to various AI models (like OpenAI, self-hosted, etc.) across different users and applications within your organization.
Not ideal if you are an individual user running AI models locally or don't require centralized management, authentication, or performance optimization for multiple models in a production environment.
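Since the gateway sits between clients and model backends, applications typically just swap their model endpoint for the gateway's address. A minimal sketch of what a client call might look like, assuming the gateway exposes an OpenAI-compatible chat endpoint (a common convention for such gateways, implied but not confirmed by this listing; the base URL, `/v1/chat/completions` path, and Bearer auth scheme are all assumptions):

```python
# Hedged sketch: routing a chat request through an OpenAI-compatible
# gateway. The deployment address, endpoint path, and auth scheme are
# assumptions typical of such gateways, not confirmed by this listing.
import json
import urllib.request

GATEWAY_BASE = "http://localhost:8080"  # hypothetical deployment address


def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build a POST to the gateway's assumed OpenAI-style chat endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{GATEWAY_BASE}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # A gateway-issued key (assumed Bearer scheme)
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


if __name__ == "__main__":
    req = build_chat_request("gpt-4o-mini", "Hello", "sk-example")
    # urllib.request.urlopen(req) would send it to a running gateway
    print(req.full_url)
```

The point of the pattern is that the gateway, not each application, holds the real provider credentials; clients authenticate with gateway-issued keys and the gateway routes and logs the request.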
Stars: 53
Forks: 7
Language: Rust
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/doublewordai/control-layer"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
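The same endpoint can be called from code. A small sketch using only the Python standard library, based on the curl command above (the `X-Api-Key` header name used for the optional key is an assumption; check the provider's docs for the actual auth scheme):

```python
# Minimal sketch of fetching a repo's quality data from the listing API.
# The endpoint URL comes from the curl example above; the X-Api-Key
# header name for authenticated requests is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_request(repo: str, api_key: str = None) -> urllib.request.Request:
    """Build a GET request for a repo's quality data (e.g. 'owner/name')."""
    req = urllib.request.Request(f"{BASE}/{repo}")
    if api_key:  # optional: lifts the limit from 100 to 1,000 requests/day
        req.add_header("X-Api-Key", api_key)  # assumed header name
    return req


def fetch_quality(repo: str, api_key: str = None) -> dict:
    """Fetch and decode the JSON payload for a repo."""
    with urllib.request.urlopen(build_request(repo, api_key)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(fetch_quality("doublewordai/control-layer"))
```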
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...