ray-project/kuberay
A toolkit to run Ray applications on Kubernetes
For platform engineers and MLOps teams managing large-scale AI/ML workloads, KubeRay simplifies running distributed Ray applications on Kubernetes. It takes your Ray application code and desired cluster configuration, then automates deployment, scaling, and lifecycle management for your Ray clusters. This lets you efficiently run workloads such as large language model inference, batch processing, and model training.
2,370 stars. Actively maintained with 43 commits in the last 30 days.
Use this if you are an infrastructure or platform engineer who needs to run complex, distributed AI/ML applications using Ray on a Kubernetes cluster with robust management features like autoscaling and fault tolerance.
Not ideal if you are a data scientist or ML practitioner who wants to run small-scale Ray applications locally or on a single machine without managing Kubernetes infrastructure.
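KubeRay clusters are declared as Kubernetes custom resources. Below is a minimal sketch of a RayCluster manifest, assuming the KubeRay operator is already installed; the image tag, replica counts, and resource requests are illustrative assumptions, not recommendations.

```yaml
# Minimal RayCluster sketch for the KubeRay operator.
# Image tag and resource values below are illustrative assumptions.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: example-raycluster
spec:
  headGroupSpec:
    rayStartParams:
      dashboard-host: "0.0.0.0"
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0
            resources:
              requests:
                cpu: "1"
                memory: 2Gi
  workerGroupSpecs:
    - groupName: workers
      replicas: 2        # operator scales between min and max when autoscaling
      minReplicas: 1
      maxReplicas: 5
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0
              resources:
                requests:
                  cpu: "1"
                  memory: 2Gi
```

Applied with `kubectl apply -f raycluster.yaml`, the operator creates head and worker pods and manages their lifecycle for you.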
Stars
2,370
Forks
722
Language
Go
License
Apache-2.0
Category
MLOps
Last pushed
Mar 12, 2026
Commits (30d)
43
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/ray-project/kuberay"
Open to everyone: 100 requests/day with no key needed, or 1,000 requests/day with a free key.
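The same endpoint can be called from code. A small Python sketch below builds the request URL from a category and a repo slug; the JSON field names (`stars`, `forks`, `language`, `commits_30d`) are assumed from the stats shown on this page, not taken from published API documentation.

```python
import json
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-API URL for a repo slug like 'ray-project/kuberay'."""
    owner, name = repo.split("/")
    return f"{BASE}/{category}/{quote(owner)}/{quote(name)}"

# Hypothetical response body, with field names assumed from the stats above.
sample = '{"stars": 2370, "forks": 722, "language": "Go", "commits_30d": 43}'
data = json.loads(sample)

print(quality_url("mlops", "ray-project/kuberay"))
print(data["stars"], data["commits_30d"])
```

In practice you would fetch the URL (e.g. with `urllib.request` or `requests`) and parse the response the same way.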
Related tools
skypilot-org/skypilot
Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage...
dstackai/dstack
dstack is an open-source control plane for running development, training, and inference jobs on...
kubeflow/kale
Kubeflow’s superfood for Data Scientists
volcano-sh/volcano
A Cloud Native Batch System (Project under CNCF)
m3dev/gokart
Gokart solves reproducibility, task dependencies, constraints of good code, and ease of use for...