lightseekorg/smg

Shepherd Model Gateway

Score: 55 / 100 · Established

Shepherd Model Gateway helps organizations manage their large language models (LLMs) efficiently by routing each user request to the best available model, whether hosted internally or by a cloud provider. It accepts incoming chat, completion, or embedding requests, directs them to the appropriate LLM, and returns the model's response. It is designed for operations engineers, IT managers, and AI platform administrators who need to serve many users across a variety of LLMs.
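The routing behavior described above can be thought of as a selection over healthy, capable backends. The Python below is a minimal illustrative sketch of that idea, not SMG's actual implementation; every name and field in it is hypothetical.

```python
# Hypothetical sketch of gateway routing: pick the least-loaded healthy
# backend that supports the request type. Not SMG's real API.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    capabilities: set   # request types this model serves
    healthy: bool       # result of the last liveness check
    load: float         # current utilization, 0.0 - 1.0

def route(request_type: str, backends: list) -> Backend:
    """Return the least-loaded healthy backend supporting request_type."""
    candidates = [b for b in backends
                  if b.healthy and request_type in b.capabilities]
    if not candidates:
        raise RuntimeError(f"no backend available for {request_type!r}")
    return min(candidates, key=lambda b: b.load)

backends = [
    Backend("local-gpu-llama", {"chat", "completion"}, True, 0.8),
    Backend("cloud-gpt", {"chat", "completion", "embedding"}, True, 0.3),
    Backend("local-embedder", {"embedding"}, False, 0.1),  # failed health check
]

print(route("chat", backends).name)       # least-loaded chat-capable backend
print(route("embedding", backends).name)  # only healthy embedding backend
```

A real gateway would layer authentication, retries, and failover on top, but the core decision per request is this kind of filter-then-rank step.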

Use this if you need to reliably serve multiple large language models, maximize the usage of your existing GPU resources, and maintain control over your LLM infrastructure and user data.

Not ideal if you are a single user running a small number of local models and don't require advanced routing, high availability, or enterprise-level control.

Tags: LLM deployment · AI infrastructure · model serving · API management · GPU resource optimization
No package · No dependents
Maintenance 10 / 25
Adoption 13 / 25
Maturity 13 / 25
Community 19 / 25


Stars: 89
Forks: 18
Language: Rust
License: Apache-2.0
Category: llm-api-gateways
Last pushed: Mar 13, 2026
Monthly downloads: 52
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lightseekorg/smg"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.