peva3/SmarterRouter

SmarterRouter: An intelligent LLM gateway and VRAM-aware router for Ollama, llama.cpp, and OpenAI. Features semantic caching, model profiling, and automatic failover for local AI labs.

Quality score: 35 / 100 (Emerging)

SmarterRouter acts as an intelligent coordinator for your local AI models and even cloud-based LLMs. It takes your text prompts and automatically sends them to the most suitable AI model you have available, based on its understanding of the task and the models' performance. This is for anyone who uses multiple large language models and wants to ensure the right one is used for the right job, without manual selection or high cloud costs.

Use this if you manage several local AI models (like with Ollama) and want an automated system to pick the best model for each query, optimizing for speed or quality.

Not ideal if you only use a single AI model or primarily rely on a single cloud LLM provider and don't need intelligent routing or local model management.
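Since SmarterRouter presents itself as an OpenAI-compatible gateway in front of Ollama, llama.cpp, and OpenAI, a client would presumably send it a standard chat-completions payload and let the router pick the backend model. The sketch below shows what such a payload might look like; the payload shape follows the OpenAI chat-completions convention, and the "auto" model name is a hypothetical placeholder for "let the router decide" (check the project's README for the actual endpoint and model-selection convention).

```python
import json

def build_chat_request(prompt: str, model: str = "auto") -> dict:
    """Build an OpenAI-style chat-completions payload.

    model="auto" is an assumed placeholder meaning "let the router
    choose the best available model"; the real convention may differ.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# A client would POST this JSON to the gateway's chat-completions
# endpoint instead of talking to any one model server directly.
payload = build_chat_request("Summarize this changelog in two bullets.")
print(json.dumps(payload, indent=2))
```

The point of the gateway pattern is that the client code stays identical whether the prompt ultimately lands on a local Ollama model or a cloud LLM; only the router's configuration changes.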

Tags: AI-workflow-optimization, local-AI-management, LLM-deployment, prompt-routing, AI-resource-management
No package · No dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 11 / 25
Community: 6 / 25


Stars: 63
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/peva3/SmarterRouter"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.