RunLLM/aqueduct
Aqueduct is no longer maintained. It lets you run LLM and ML workloads on any cloud infrastructure.
Managing and deploying large language models (LLMs) and machine learning (ML) models across various cloud environments can be complex. This tool helps machine learning engineers and data scientists streamline their ML operations, allowing them to define ML tasks in standard Python code and then run them seamlessly on existing cloud infrastructure like Kubernetes, Spark, or AWS Lambda. It provides a unified way to deploy models and monitor their performance and data across different cloud services.
520 stars. No commits in the last 6 months.
Use this if you are a machine learning engineer or data scientist who needs a unified framework to define, deploy, and monitor your LLM and ML workflows across diverse cloud infrastructure without rewriting code for each environment.
Not ideal if you are looking for a fully managed, hosted MLOps platform, or if your ML workloads are not run on cloud infrastructure.
Stars: 520
Forks: 21
Language: Go
License: Apache-2.0
Category:
Last pushed: Jun 07, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/RunLLM/aqueduct"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.