anmolg1997/Multi-LoRA-Serve
Multi-adapter inference gateway — one base model, many LoRA adapters per-request, OpenAI-compatible API, tenant routing, Prometheus metrics, FastAPI + React
Overall score: 22 / 100
Status: Experimental (no published package, no dependents)
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 9 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 21, 2026
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/anmolg1997/Multi-LoRA-Serve"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
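The same endpoint shown in the `curl` command above can be queried from Python. A minimal sketch follows; the URL pattern is taken from the page, but the JSON response shape (a top-level `score` plus a per-dimension `breakdown`) is an assumption based on the figures displayed here, not a documented schema.

```python
import json
from urllib.parse import quote

# Base endpoint, as shown in the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL, URL-encoding each path segment
    (the '/' inside 'owner/repo' is kept as a path separator)."""
    return f"{API_BASE}/{quote(ecosystem, safe='')}/{quote(repo, safe='/')}"

# Hypothetical response payload -- the real API's field names may differ.
sample = json.loads("""{
  "score": 22,
  "breakdown": {"maintenance": 13, "adoption": 0, "maturity": 9, "community": 0}
}""")

def overall_score(payload: dict) -> int:
    """Extract the overall 0-100 score from a (assumed) response payload."""
    return payload["score"]

print(quality_url("transformers", "anmolg1997/Multi-LoRA-Serve"))
print(overall_score(sample))
```

To fetch live data, pass the built URL to any HTTP client (e.g. `urllib.request.urlopen`); remember the anonymous tier is limited to 100 requests/day.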
Higher-rated alternatives
- robert-mcdermott/ollama-batch-cluster (score 32): Large Scale Batch Processing with Ollama
- kimmmmyy223/llm-batch (score 21): 🚀 Process JSON data in batches with `llm-batch`, leveraging sequential or parallel modes for...
- Rohit2sali/vllm-multi-tenant-llm-gateway (score 13): A vLLM multi-tenant large language model gateway, built to serve many...