adysec/OllamaR

Ollama load-balancing server | A high-performance, easy-to-configure open-source load balancer optimized for Ollama workloads. It helps improve application availability and response times while making efficient use of system resources.

Score: 57 / 100 (Established)

This server helps manage and distribute requests to multiple Ollama AI models, ensuring your applications remain responsive and available even under heavy usage. It takes in user requests for AI model interactions (like chatting or embedding) and routes them efficiently to the best available Ollama model. This is for developers and system administrators who build or maintain applications that rely on Ollama models and need robust, scalable AI infrastructure.


Use this if you are running multiple Ollama instances and need a way to efficiently distribute incoming requests, improve application uptime, and protect your core Ollama servers.

Not ideal if you are a single user running Ollama locally and do not need to manage multiple instances or expose your models to external applications.
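The routing strategy OllamaR actually uses is not described on this page; as a general illustration of how a balancer distributes requests across several Ollama instances, here is a minimal round-robin sketch in Python. The backend URLs are hypothetical placeholders, not addresses from the project.

```python
from itertools import cycle


class RoundRobinBalancer:
    """Cycle through a fixed pool of backend base URLs."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def next_backend(self):
        # Each call returns the next backend in rotation.
        return next(self._pool)


# Hypothetical Ollama instances (11434 is Ollama's default port).
balancer = RoundRobinBalancer([
    "http://ollama-1:11434",
    "http://ollama-2:11434",
])
print(balancer.next_backend())  # http://ollama-1:11434
print(balancer.next_backend())  # http://ollama-2:11434
print(balancer.next_backend())  # wraps back to http://ollama-1:11434
```

A production balancer would also health-check backends and retry on failure; round-robin is only the simplest possible selection policy.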

Tags: AI infrastructure, load balancing, system administration, application scaling, distributed systems

No Package · No Dependents

Score breakdown:
- Maintenance: 6 / 25
- Adoption: 10 / 25
- Maturity: 16 / 25
- Community: 25 / 25


Stars: 185
Forks: 160
Language:
License: GPL-3.0
Last pushed: Nov 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/adysec/OllamaR"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
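For callers who prefer Python over curl, the endpoint URL can be built from the path pattern visible in the example above (`/api/v1/quality/{category}/{owner}/{repo}`). The response schema is not documented here, so this sketch only constructs the URL; an actual fetch could then use `urllib.request.urlopen` on the result.

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository.

    Path pattern inferred from the curl example on this page;
    segments are percent-encoded defensively.
    """
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"


print(quality_url("llm-tools", "adysec", "OllamaR"))
# https://pt-edge.onrender.com/api/v1/quality/llm-tools/adysec/OllamaR
```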