JawherKl/llm-api-gateway

Scalable API gateway that aggregates calls to multiple LLMs (OpenAI, Hugging Face, Groq, Anthropic, Gemini, etc.), with caching, rate limiting, logging, monitoring, and production-ready deployment.

Score: 27 / 100 · Experimental

This project offers a unified way to manage your interactions with various large language models (LLMs) like OpenAI, Anthropic, or Gemini. It takes your requests, directs them to the correct LLM, and returns the AI's response, all while handling behind-the-scenes tasks like caching and usage limits. This is ideal for a machine learning engineer or product manager who needs to integrate and oversee multiple LLM services within an application.
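The core idea of such a gateway, inspecting each incoming request and dispatching it to the right provider, can be sketched in Go (the repo's language). The provider-prefix convention and names below are illustrative assumptions, not this project's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveProvider maps a model identifier like "openai/gpt-4o" to a
// backend provider name. The "provider/model" prefix convention is an
// illustrative assumption, not the project's documented scheme.
func resolveProvider(model string) (string, error) {
	parts := strings.SplitN(model, "/", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("model %q missing provider prefix", model)
	}
	switch parts[0] {
	case "openai", "anthropic", "gemini", "groq", "huggingface":
		return parts[0], nil
	default:
		return "", fmt.Errorf("unknown provider %q", parts[0])
	}
}

func main() {
	for _, m := range []string{"openai/gpt-4o", "anthropic/claude-3", "bad-model"} {
		p, err := resolveProvider(m)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println(m, "->", p)
	}
}
```

A real gateway would wrap this lookup with the caching, rate limiting, and logging the description mentions before forwarding the request to the chosen provider's SDK or HTTP API.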

No commits in the last 6 months.

Use this if you are building an application that needs to use multiple different large language models and you want a single, controlled entry point for all your AI interactions.

Not ideal if you only need to use a single LLM provider for a simple, low-volume application, as it adds unnecessary complexity.

AI application development · LLM integration · API management · AI infrastructure · Machine learning operations
No License · Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 6 / 25
Maturity 7 / 25
Community 12 / 25


Stars: 18
Forks: 3
Language: Go
License: none
Category: llm-api-gateways
Last pushed: Sep 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/JawherKl/llm-api-gateway"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
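The same endpoint can be called programmatically; a minimal Go sketch (the response schema is not documented here, so the body is decoded into a generic map):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// qualityURL builds the public quality-data endpoint shown in the
// curl example for a given GitHub owner and repository.
func qualityURL(owner, repo string) string {
	return fmt.Sprintf(
		"https://pt-edge.onrender.com/api/v1/quality/llm-tools/%s/%s",
		owner, repo)
}

func main() {
	resp, err := http.Get(qualityURL("JawherKl", "llm-api-gateway"))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	var data map[string]any
	if err := json.Unmarshal(body, &data); err != nil {
		fmt.Println("non-JSON response:", string(body))
		return
	}
	fmt.Printf("%+v\n", data)
}
```

Anonymous access is capped at 100 requests/day, so production use should register for a key.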