lightseekorg/smg
Shepherd Model Gateway
Shepherd Model Gateway helps organizations manage their large language models (LLMs) efficiently by routing each user request to the best available model, whether it is hosted internally or by a cloud provider. It accepts incoming chat, completion, or embedding requests, directs them to the appropriate LLM, and returns the model's response. It is designed for operations engineers, IT managers, and AI platform administrators who need to serve many users across a variety of LLMs.
Use this if you need to reliably serve multiple large language models, maximize the utilization of your existing GPU resources, and retain control over your LLM infrastructure and user data.
Not ideal if you are a single user running a few local models and don't need advanced routing, high availability, or enterprise-level control.
Stars
89
Forks
18
Language
Rust
License
Apache-2.0
Category
Last pushed
Mar 13, 2026
Monthly downloads
52
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lightseekorg/smg"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
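The endpoint above follows an owner/repo path pattern. A minimal sketch of building that URL programmatically, assuming the same pattern holds for other repositories (only the lightseekorg/smg path is confirmed by the example above; the helper name is my own):

```python
# Base endpoint taken from the curl example in this listing.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data API URL for a GitHub owner/repo slug.

    Assumes the endpoint generalizes beyond the documented
    lightseekorg/smg example.
    """
    return f"{BASE}/{owner}/{repo}"

# Reproduces the documented endpoint:
print(quality_url("lightseekorg", "smg"))
```

You can pass the result to any HTTP client; within the free tier no API key is required.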
Related tools
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...