JawherKl/llm-api-gateway
Scalable API gateway that aggregates calls to multiple LLMs (OpenAI, Hugging Face, Groq, Anthropic, Gemini, etc.) and includes caching, rate limiting, logging, monitoring, and production-ready deployment.
This project offers a unified way to manage your interactions with various large language models (LLMs) such as OpenAI, Anthropic, or Gemini. It routes each request to the correct LLM and returns the model's response, while handling behind-the-scenes tasks like caching and usage limits. This is ideal for a machine learning engineer or product manager who needs to integrate and oversee multiple LLM services within one application.
No commits in the last 6 months.
Use this if you are building an application that calls multiple large language models and you want a single, controlled entry point for all your AI interactions.
Not ideal if you only need a single LLM provider for a simple, low-volume application, as the gateway adds unnecessary complexity.
Stars: 18
Forks: 3
Language: Go
License: —
Category:
Last pushed: Sep 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/JawherKl/llm-api-gateway"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
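If you prefer to call the endpoint from code rather than curl, a minimal sketch in Python is below. The only thing taken from this page is the endpoint URL; the shape of the JSON response is not documented here, so the fetch helper returns the parsed payload as-is rather than assuming any fields.

```python
import json
import urllib.request

# Base endpoint shown on this page (no key needed up to 100 requests/day).
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(repo: str) -> str:
    """Build the quality-API URL for an owner/name repo slug."""
    return f"{BASE}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch the quality record for a repo.

    The response schema is not documented on this page, so the parsed
    JSON is returned unchanged for the caller to inspect.
    """
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("JawherKl/llm-api-gateway"))
```

With a free API key (1,000 requests/day), you would presumably attach it to the request; the key's header or parameter name is not shown here, so it is omitted from the sketch.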
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...