run-ai/genv
GPU environment and cluster management with LLM support
genv helps data scientists and ML engineers manage and share GPU resources across projects or within a team. You can allocate specific GPUs and amounts of GPU memory to each experiment, so everyone gets the resources they need, and partition raw GPU capacity into isolated, configurable environments for different tasks, making collaboration and reproducibility simpler.
658 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are a data scientist or ML engineer who shares GPU machines or clusters with teammates or across different machine learning projects and need better control over resource allocation.
Not ideal if you are working on a single machine with dedicated GPU access and no need to share or manage environments.
Stars: 658
Forks: 42
Language: Python
License: AGPL-3.0
Category: MLOps
Last pushed: May 16, 2024
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/run-ai/genv"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
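The endpoint above returns the repository's quality metrics as JSON. A minimal Python sketch of calling it from code instead of curl (the payload's field names are not documented here, so inspect the response before relying on any schema):

```python
import json
import urllib.request

# Quality-report endpoint for run-ai/genv (100 requests/day without a key).
API_URL = "https://pt-edge.onrender.com/api/v1/quality/mlops/run-ai/genv"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch and decode the JSON quality report for the repository."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Example usage (performs a live network request):
# report = fetch_quality()
# print(json.dumps(report, indent=2))
```

With an API key, the same pattern applies; per the note above, a key raises the limit to 1,000 requests/day.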
Related tools
XShengTech/MEGREZ
🌈 MEGREZ | 🍒 Make Extendable GPU Resource EASY
benz0li/mojo-dev-container
Multi-arch (linux/amd64, linux/arm64/v8) Mojo dev container
gabe565/docker-obico
Pre-built Docker images for Obico server
alan-turing-institute/AI-workflows
A collection of portable, real-world AI workflows for testing and benchmarking
danghoangnhan/code-server-astraluv
Minimal GPU-enabled Kubeflow notebook with Astral UV, VS Code Server, and SSH access — CUDA...