ubiomni/mrouter

One Endpoint, Every Model. Route, Monitor, and Failover — All From Your Terminal.

Score: 44 / 100 (Emerging)

This tool acts as a central hub for access to AI models such as GPT-4, Claude, and Gemini. Your applications send requests to a single local address, and it routes them to the appropriate provider. This is useful for developers whose applications rely on multiple AI services, especially when deployed on remote servers.


Use this if you are a developer running server applications that need to dynamically switch between, or fail over to, different AI model providers without reconfiguring each application individually.
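The failover behavior described above can be sketched in Rust (the project's language). This is a minimal illustration of the general pattern, not mrouter's actual API: provider names and the `route_with_failover` function are hypothetical, and real network calls are replaced with stub closures.

```rust
// Hypothetical sketch of priority-ordered provider failover.
// Each provider is a name plus a stub call that either returns a
// response or fails, standing in for a real HTTP request.
#[derive(Debug, PartialEq)]
enum RouteError {
    AllProvidersFailed,
}

struct Provider {
    name: &'static str,
    call: fn(&str) -> Result<String, String>,
}

// Try each provider in order; return the first successful response
// along with the name of the provider that served it.
fn route_with_failover(
    providers: &[Provider],
    prompt: &str,
) -> Result<(String, String), RouteError> {
    for p in providers {
        match (p.call)(prompt) {
            Ok(resp) => return Ok((p.name.to_string(), resp)),
            Err(_) => continue, // provider down: fall through to the next one
        }
    }
    Err(RouteError::AllProvidersFailed)
}

fn main() {
    let providers = [
        // First provider simulates an outage, second one answers.
        Provider { name: "gpt-4", call: |_| Err("timeout".to_string()) },
        Provider { name: "claude", call: |p| Ok(format!("echo: {p}")) },
    ];
    let (name, resp) = route_with_failover(&providers, "hello").unwrap();
    println!("{name} -> {resp}");
}
```

The point of centralizing this logic is that applications keep talking to one local endpoint while the router absorbs provider outages.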

Not ideal if you only use one AI model provider for all your applications and don't require centralized management or failover capabilities.

AI-application-development cloud-infrastructure API-management LLM-ops backend-development
No package · No dependents
Maintenance 13 / 25
Adoption 9 / 25
Maturity 11 / 25
Community 11 / 25


Stars: 104
Forks: 9
Language: Rust
License: MIT
Last pushed: Mar 22, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/ubiomni/mrouter"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.