skypilot-org/skypilot

Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, or on-prem).

Score: 80 / 100 · Verified

This project helps AI developers and infrastructure teams efficiently run, manage, and scale AI workloads such as model training or agent development. It takes your existing AI code and resource requirements, then provisions and optimizes the necessary compute across cloud providers or on-premise systems. The result is your AI job running with optimized cost and resource utilization, accessible through a unified interface.
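The workflow described above can be sketched as a minimal SkyPilot task spec. The field names (`resources`, `accelerators`, `setup`, `run`) follow SkyPilot's documented YAML schema; the specific values and file names (`task.yaml`, `train.py`) are illustrative assumptions, not taken from this listing.

```yaml
# task.yaml — hypothetical SkyPilot task spec; values are illustrative.
resources:
  accelerators: A100:1   # request one NVIDIA A100 GPU from any available cloud

setup: |
  pip install -r requirements.txt

run: |
  python train.py --epochs 10
```

Such a spec would typically be launched with `sky launch task.yaml`, letting SkyPilot pick the cheapest provider that satisfies the resource request.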

9,569 stars. Used by 3 other packages. Actively maintained with 138 commits in the last 30 days. Available on PyPI.

Use this if you are an AI developer or infrastructure engineer who needs to run complex AI workloads across different cloud providers, Kubernetes, or on-premise clusters, and you want to reduce costs and simplify resource management.

Not ideal if you only run simple, lightweight AI tasks in a single, fixed environment, or if you prefer to manage all your cloud infrastructure configurations manually.

AI-workload-management MLOps GPU-resource-orchestration cloud-cost-optimization distributed-training
Maintenance: 22 / 25
Adoption: 13 / 25
Maturity: 25 / 25
Community: 20 / 25


Stars: 9,569
Forks: 987
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 138
Dependencies: 49
Reverse dependents: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/skypilot-org/skypilot"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
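The same endpoint can be queried from Python. This is a minimal sketch using only the standard library: the URL pattern comes from the curl example above, while the JSON response schema is an assumption and may differ from what the API actually returns.

```python
# Minimal client for the quality API shown above (stdlib only).
# The URL pattern matches the curl example; the response schema is assumed.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("mlops", "skypilot-org", "skypilot"))
# → https://pt-edge.onrender.com/api/v1/quality/mlops/skypilot-org/skypilot
```

Keeping the URL construction separate from the network call makes the client easy to test without hitting the rate-limited endpoint.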