XShengTech/MEGREZ
🌈 MEGREZ | 🍒 Make Extendable GPU Resource EASY
Need to manage GPU resources across multiple machines for deep learning? MEGREZ lets researchers and data scientists create isolated container instances for their AI/ML workloads, turning raw compute and deep learning frameworks into an organized, monitorable environment where experiments run without interfering with one another.
Use this if you need a user-friendly way to provision and manage GPU access for multiple users or projects, ensuring each gets dedicated, monitored resources for their deep learning tasks.
Not ideal if you're looking for a simple, single-machine setup for personal projects without the need for multi-user management or extensive resource isolation.
Stars
124
Forks
9
Language
Go
License
AGPL-3.0
Category
MLOps
Last pushed
Dec 07, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/XShengTech/MEGREZ"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
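If you're scripting against the API, the endpoint URL can be assembled programmatically. A minimal Python sketch, assuming the path pattern `/api/v1/quality/<category>/<owner>/<repo>` generalizes from the single example above (the set of valid category segments and the response schema are not documented here):

```python
def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Build a pt-edge quality API URL.

    Assumption: the path layout /api/v1/quality/<category>/<owner>/<repo>
    is inferred from the one example URL shown above and may not hold
    for other endpoints.
    """
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"

# Reproduces the example request URL for this repo:
print(quality_api_url("mlops", "XShengTech", "MEGREZ"))
```

From there, fetching and pretty-printing the JSON is a matter of passing the URL to any HTTP client (e.g. `curl` as shown above, or `urllib.request` in the standard library).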
Higher-rated alternatives
run-ai/genv
GPU environment and cluster management with LLM support
benz0li/mojo-dev-container
Multi-arch (linux/amd64, linux/arm64/v8) Mojo dev container
gabe565/docker-obico
Pre-built Docker images for Obico server
alan-turing-institute/AI-workflows
A collection of portable, real-world AI workflows for testing and benchmarking
danghoangnhan/code-server-astraluv
Minimal GPU-enabled Kubeflow notebook with Astral UV, VS Code Server, and SSH access — CUDA...