star-whale/starwhale
an MLOps/LLMOps platform
Starwhale is a platform designed to help machine learning engineers and data scientists streamline their work with AI models. It helps manage the entire lifecycle of machine learning and large language models, from preparing data and building models to evaluating performance and fine-tuning. You provide your datasets, model code, and preferred computational environment, and Starwhale helps you consistently build, test, and deploy your models.
237 stars. No commits in the last 6 months.
Use this if you need a structured way to manage the development, evaluation, and deployment of your machine learning and large language models, especially across teams or in production environments.
Not ideal if you are only experimenting with small, one-off models on your local machine and don't require systematic versioning, collaboration, or deployment capabilities.
Stars: 237
Forks: 39
Language: Java
License: Apache-2.0
Category: mlops
Last pushed: Dec 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/star-whale/starwhale"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
kubeflow/katib
Automated Machine Learning on Kubernetes
kubeai-project/kubeai
AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports...
sgl-project/rbg
A workload for deploying LLM inference services on Kubernetes
beam-cloud/beta9
Ultrafast serverless GPU inference, sandboxes, and background jobs
ptimizeroracle/ondine
The LLM Dataset Engine — batch process millions of rows with 100+ providers. Multi-row batching...