matrixhub-ai/matrixhub
An open-source, self-hosted AI model hub with Hugging Face compatibility that accelerates vLLM/SGLang serving performance.
MatrixHub provides a private, secure hub for managing and distributing AI models within your enterprise. It takes AI models, like those from Hugging Face, and delivers them rapidly and securely to your GPU clusters, even in air-gapped environments. This is ideal for MLOps engineers, AI infrastructure teams, or IT professionals managing large-scale AI deployments.
Use this if you need to serve large AI models quickly and securely across many GPU nodes in an enterprise environment, especially when dealing with compliance or air-gapped networks.
Not ideal if you are a single user or small team without complex security needs, looking to experiment with AI models locally rather than running large-scale deployments.
Stars: 58
Forks: 17
Language: Go
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/matrixhub-ai/matrixhub"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
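For programmatic access, here is a minimal Go sketch (matching the project's listed language) that fetches the same endpoint as the curl command above. It assumes the endpoint returns JSON; since the response fields are not documented here, it decodes into a generic map rather than a typed struct.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Same endpoint as the curl example above; no API key is required
	// for the anonymous 100 requests/day tier.
	url := "https://pt-edge.onrender.com/api/v1/quality/llm-tools/matrixhub-ai/matrixhub"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response schema is not documented here (assumption: JSON body),
	// so decode into a generic map and print whatever fields come back.
	var data map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", data)
}

How an API key for the higher 1,000 requests/day tier is passed is not specified on this page, so it is omitted from the sketch.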
Related tools
AlexsJones/llmfit
Hundreds of models & providers. One command to find what runs on your hardware.
victordibia/llmx
An API for Chat Fine-Tuned Large Language Models (llm)
Chen-zexi/vllm-cli
A command-line interface tool for serving LLM using vLLM.
InftyAI/llmaz
☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!
livehl/aimirror
🚀 200x faster! A download accelerator for the AI era | Speeds up Docker/PyPI/HuggingFace/CRAN | Parallel chunking + smart caching to make downloads fly