EM-GeekLab/LLMOne
An enterprise-grade automated LLM deployment tool that makes AI servers truly "plug-and-play".
Deploying large language models (LLMs) on your own hardware can be complex and time-consuming. LLMOne simplifies this by automating the entire setup, from operating system installation to model deployment, in just a few clicks. It's designed for businesses and individuals who need to quickly stand up private, high-performance LLM inference services on dedicated servers or workstations.
Use this if you need to quickly and reliably deploy enterprise-grade LLM services on your private hardware without extensive technical configuration.
Not ideal if you're looking for a cloud-based LLM solution or primarily need to fine-tune models rather than deploy them for inference.
Stars
87
Forks
3
Language
TypeScript
License
MulanPSL-2.0
Category
Last pushed
Mar 04, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/EM-GeekLab/LLMOne"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
AlexsJones/llmfit
Hundreds of models & providers. One command to find what runs on your hardware.
victordibia/llmx
An API for Chat Fine-Tuned Large Language Models (llm)
Chen-zexi/vllm-cli
A command-line interface tool for serving LLM using vLLM.
InftyAI/llmaz
☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!
livehl/aimirror
🚀 200x speed! A download accelerator for the AI era | Full acceleration for Docker/PyPI/HuggingFace/CRAN | Parallel chunking + smart caching make downloads fly