beam-cloud/beta9
Ultrafast serverless GPU inference, sandboxes, and background jobs
Beta9 lets developers run and scale AI applications by taking ordinary Python code and deploying it as fast, serverless endpoints or background jobs. It manages the underlying infrastructure so developers can focus on their models and application logic. It is aimed at machine learning engineers, data scientists, and AI developers building and deploying AI-powered features or services.
Use this if you need to deploy AI models or run computationally intensive AI tasks efficiently without managing servers.
Not ideal if your primary need is general-purpose web hosting or traditional application deployment that doesn't involve AI workloads.
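To make the deployment model concrete, here is a minimal sketch of turning a Python function into a serverless GPU endpoint, assuming beta9's decorator-based SDK. The decorator arguments (cpu, memory, gpu) and the function body are illustrative and may not match the current API exactly.

import beta9  # open-source SDK from beam-cloud/beta9

# Declare the resources this function needs; when deployed, calls to it
# run on remote workers instead of the local machine.
@beta9.endpoint(cpu=1, memory=128, gpu="T4")
def predict(prompt: str = "") -> dict:
    # Model inference would run here on the remote GPU worker.
    return {"echo": prompt}

After deployment (typically a single CLI command such as `beta9 deploy`, exact invocation may differ by version), the platform exposes the function behind an HTTP endpoint and scales workers up and down on demand.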
Stars: 1,602
Forks: 140
Language: Go
License: AGPL-3.0
Category: MLOps
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/beam-cloud/beta9"
Open to everyone: 100 requests/day with no key needed; a free API key raises the limit to 1,000 requests/day.
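For scripting against the API instead of using curl, a plain HTTP GET is enough. The sketch below uses Python's requests library and assumes only that the endpoint returns JSON; the response schema is not documented in this listing.

import requests

# Same endpoint as the curl command above; no key needed up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/mlops/beam-cloud/beta9"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
# Print the raw JSON payload rather than assuming specific fields.
print(resp.json())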
Related tools
kubeflow/katib
Automated Machine Learning on Kubernetes
kubeai-project/kubeai
AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports...
sgl-project/rbg
A workload for deploying LLM inference services on Kubernetes
optimizeroracle/ondine
The LLM Dataset Engine — batch process millions of rows with 100+ providers. Multi-row batching...
scitix/arks
Arks is a cloud-native inference framework running on Kubernetes