adysec/OllamaR
Ollama load-balancing server | A high-performance, easy-to-configure open-source load balancer optimized for Ollama workloads. It helps improve application availability and response speed while ensuring efficient use of system resources.
This server helps manage and distribute requests to multiple Ollama AI models, ensuring your applications remain responsive and available even under heavy usage. It takes in user requests for AI model interactions (like chatting or embedding) and routes them efficiently to the best available Ollama model. This is for developers and system administrators who build or maintain applications that rely on Ollama models and need robust, scalable AI infrastructure.
Use this if you are running multiple Ollama instances and need a way to efficiently distribute incoming requests, improve application uptime, and protect your core Ollama servers.
Not ideal if you are a single user running Ollama locally and do not need to manage multiple instances or expose your models to external applications.
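To make the routing idea concrete, here is a minimal round-robin sketch in Python. This is an illustration of the general technique only, not OllamaR's actual implementation; the backend URLs and class name are hypothetical.

```python
from itertools import cycle


class RoundRobinBalancer:
    """Rotate requests across a fixed pool of Ollama backend URLs.

    Illustrative only: OllamaR's real selection strategy and health
    checking may differ.
    """

    def __init__(self, backends):
        if not backends:
            raise ValueError("need at least one backend")
        self._cycle = cycle(backends)

    def next_backend(self):
        # Each call returns the next backend in rotation, wrapping around.
        return next(self._cycle)


# Hypothetical local Ollama instances on the default port and one above it.
backends = ["http://127.0.0.1:11434", "http://127.0.0.1:11435"]
lb = RoundRobinBalancer(backends)
picks = [lb.next_backend() for _ in range(4)]
print(picks)
```

In a real deployment the balancer would also track backend health and retry failed requests against the next instance, which is what lets the pool stay available when one Ollama server goes down.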
Stars
185
Forks
160
Language
—
License
GPL-3.0
Category
Last pushed
Nov 06, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/adysec/OllamaR"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
majiayu000/litellm-rs
A high-performance AI Gateway written in Rust — call 100+ LLM APIs using OpenAI format
intelligentnode/IntelliNode
Access the latest AI models like ChatGPT, LLaMA, Deepseek, Diffusion, Hugging Face, and beyond...
wpydcr/LLM-Kit
🚀 WebUI integrated platform for the latest LLMs | an end-to-end WebUI toolkit for major language models...
henomis/lingoose
🪿 LinGoose is a Go framework for building awesome AI/LLM applications.
llmapi-io/llmapi-server
Self-hosted llmapi server that makes it really easy to access LLMs! 🚀