run-ai/genv

GPU environment and cluster management with LLM support

Score: 49 / 100 (Emerging)

genv helps data scientists and ML engineers manage and share GPU resources across projects or within a team. You can allocate specific GPUs and GPU memory to an experiment, so everyone gets the resources they need, and carve raw GPU capacity into isolated, configurable environments for different tasks, making it simpler to collaborate and reproduce work.
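A typical session with the genv CLI might look like the sketch below. Command names are based on the project's documentation as best recalled; the `gpu-memory` setting in particular is an assumption, so verify flags against the current README before relying on them.

```
# Hedged sketch of a genv workflow (verify against the genv README)
genv activate --id my-experiment   # create/enter a GPU environment in this shell
genv config gpu-memory 4g          # cap the environment's GPU memory (assumed flag)
genv attach --count 1              # claim one free GPU for this environment
python train.py                    # runs with only the attached GPU visible
genv deactivate                    # leave the environment, releasing the GPU
```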

658 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are a data scientist or ML engineer who shares GPU machines or clusters with teammates or across different machine learning projects and need better control over resource allocation.

Not ideal if you are working on a single machine with dedicated GPU access and no need to share or manage environments.

Tags: GPU-management, ML-resource-allocation, data-science-workflow, LLM-deployment, ML-experiment-reproducibility
Stale (no commits in 6 months)
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 14 / 25


Stars: 658
Forks: 42
Language: Python
License: AGPL-3.0
Last pushed: May 16, 2024
Commits (30d): 0
Dependencies: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/run-ai/genv"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
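The same endpoint can be called from Python with only the standard library. This is a minimal sketch: the helper names `quality_url` and `fetch_quality` are hypothetical, and the JSON response schema is not documented here, so inspect a live response before relying on field names.

```python
import json
import urllib.request

# Base endpoint, taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score URL for one repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as JSON. The response fields are an
    assumption based on the card above (score, stars, forks, ...)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# URL for this page's repository (matches the curl example):
url = quality_url("mlops", "run-ai", "genv")
```

Calling `fetch_quality("mlops", "run-ai", "genv")` performs the same request as the curl command; unauthenticated callers are limited to 100 requests/day.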