mirzamdhammad6266/llm-microservice-hub
Production-grade LLM microservice built with FastAPI and AsyncIO, including chat, embeddings, async tasks, and rate-limited model calls.
Stars: 1
Forks: —
Language: Python
License: —
Category: —
Last pushed: Dec 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mirzamdhammad6266/llm-microservice-hub"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
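As a convenience, the curl call above can also be made from Python. This is a minimal sketch: the endpoint URL is taken from the page, but the shape of the JSON response is not documented here, so treat the returned dict's fields as unknown until you inspect them.

```python
# Hypothetical sketch of calling the repo-quality API shown above.
# Only the endpoint URL is taken from the page; the response schema
# is an assumption and should be inspected before relying on fields.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


url = quality_url("mirzamdhammad6266", "llm-microservice-hub")
```

Calling `fetch_quality(...)` performs the same request as the curl command and counts against the same daily quota.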
Higher-rated alternatives
adysec/OllamaR
Ollama load-balancing server | A high-performance, easy-to-configure open-source load balancer optimized for Ollama. It helps improve application availability and response speed while ensuring efficient use of system resources.
majiayu000/litellm-rs
A high-performance AI Gateway written in Rust — call 100+ LLM APIs using OpenAI format
intelligentnode/IntelliNode
Access the latest AI models like ChatGPT, LLaMA, DeepSeek, Diffusion, Hugging Face, and beyond...
wpydcr/LLM-Kit
🚀 WebUI-integrated platform for the latest LLMs | A full-workflow WebUI toolkit for major language models...
henomis/lingoose
🪿 LinGoose is a Go framework for building awesome AI/LLM applications.