matrixhub-ai/matrixhub

An open-source, self-hosted AI model hub with Hugging Face compatibility, built to accelerate vLLM/SGLang serving performance.

Score: 50 / 100 (Established)

MatrixHub provides a private, secure hub for managing and distributing AI models within your enterprise. It distributes models, such as those from Hugging Face, rapidly and securely to your GPU clusters, even in air-gapped environments. It is aimed at MLOps engineers, AI infrastructure teams, and IT professionals managing large-scale AI deployments.

Use this if you need to serve large AI models quickly and securely across many GPU nodes in an enterprise environment, especially when dealing with compliance or air-gapped networks.

Not ideal if you are a single user or a small team without complex security needs, simply looking to experiment with AI models locally and with no large-scale deployment requirements.

Tags: MLOps, AI-infrastructure, Enterprise-AI, Model-distribution, Secure-ML
No package published · No dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 13 / 25
Community 19 / 25


Stars: 58
Forks: 17
Language: Go
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/matrixhub-ai/matrixhub"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
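The curl call above can be wrapped in a small Python helper. This is a minimal sketch: only the endpoint URL comes from the example above, while the response field names are not documented here, so the fetch step simply decodes the JSON body without assuming its schema.

```python
import json
import urllib.request

# Base path taken from the documented curl example.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository, mirroring the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example for this repository.
    print(quality_url("llm-tools", "matrixhub-ai", "matrixhub"))
```

Without an API key the anonymous limit of 100 requests/day applies; how a key is attached to requests is not shown here, so this sketch issues unauthenticated calls only.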