lordmathis/llamactl

Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.

Score: 47 / 100 (Emerging)

This tool helps AI engineers and MLOps professionals efficiently manage and deploy open-source large language models (LLMs) across multiple inference backends: llama.cpp, MLX, and vLLM. It lets you download models, serve them through a unified API compatible with the OpenAI and Anthropic APIs, and route requests to different instances, all controlled from an intuitive web dashboard. You get a central place to manage diverse models, monitor their health, and handle distributed deployments.
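Since the unified API follows the OpenAI chat-completions convention, any OpenAI-style client should be able to talk to it. A minimal sketch of building such a request, assuming a llamactl server at `localhost:8080` and an instance named `my-llama-instance` (both are hypothetical; neither the address nor the routing-by-model-name detail is documented in this listing):

```python
import json
import urllib.request

# Hypothetical values: base URL and instance name are assumptions,
# not taken from this listing.
BASE_URL = "http://localhost:8080"

# Standard OpenAI-compatible chat-completions request body; the "model"
# field presumably selects which llamactl instance handles the request.
payload = {
    "model": "my-llama-instance",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With a running llamactl server, send it with:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Routing by the `model` field means clients need no per-backend configuration; switching an instance from llama.cpp to vLLM should be transparent to callers.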

Use this if you need a centralized system to manage, route requests to, and monitor multiple open-source LLMs across various backends and potentially different machines.

Not ideal if you only ever run a single LLM instance or are looking for a platform that handles model training and fine-tuning.

Tags: AI-inference-management, LLM-deployment, model-serving, MLOps, API-routing
No package, no dependents
Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 15 / 25
Community: 13 / 25


Stars: 89
Forks: 11
Language: Go
License: MIT
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lordmathis/llamactl"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.