RunLLM/aqueduct

Aqueduct is no longer being maintained. It allowed you to run LLM and ML workloads on any cloud infrastructure.

Score: 37 / 100 (Emerging)

Managing and deploying large language models (LLMs) and machine learning (ML) models across cloud environments can be complex. This tool helped machine learning engineers and data scientists streamline their ML operations: tasks are defined in standard Python code and then run seamlessly on existing cloud infrastructure such as Kubernetes, Spark, or AWS Lambda. It provided a unified way to deploy models and to monitor their performance and data across different cloud services.
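The core idea in the description — write ordinary Python functions, then let a framework decide where they run — can be illustrated with a minimal stand-in. This is a hypothetical sketch, not Aqueduct's actual SDK: the function and parameter names below (`run_pipeline`, `target`, etc.) are invented for illustration.

```python
# Hypothetical stand-in for the pattern the description outlines:
# tasks are plain Python functions, and a thin runner decides where
# they execute (locally here; Kubernetes/Spark/Lambda in a real system).

def clean(rows):
    """Drop records with missing values."""
    return [r for r in rows if None not in r.values()]

def score(rows):
    """Toy 'model': flag rows whose amount exceeds a threshold."""
    return [{**r, "flagged": r["amount"] > 100} for r in rows]

def run_pipeline(tasks, data, target="local"):
    """Chain tasks over data; a real orchestrator would dispatch to `target`."""
    for task in tasks:
        data = task(data)
    return data

raw = [{"amount": 50}, {"amount": 150}, {"amount": None}]
result = run_pipeline([clean, score], raw)
# result: [{"amount": 50, "flagged": False}, {"amount": 150, "flagged": True}]
```

The point of the pattern is that the task code stays plain Python, so moving a workflow between environments means changing the runner's target, not rewriting the tasks.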

520 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or data scientist who needs a unified framework to define, deploy, and monitor your LLM and ML workflows across diverse cloud infrastructure without rewriting code for each environment.

Not ideal if you are looking for a fully managed, hosted MLOps platform, or if your ML workloads are not run on cloud infrastructure.

Tags: MLOps, LLM deployment, cloud ML workflow, machine learning engineering, data science operations
Status: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 520
Forks: 21
Language: Go
License: Apache-2.0
Category: llm-api-gateways
Last pushed: Jun 07, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/RunLLM/aqueduct"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
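For calling the endpoint from Python rather than curl, a small stdlib-only sketch. Only the URL path shape is taken from the example above; the helper name `quality_url` is invented, and the response schema is not documented here, so no fields are assumed.

```python
# Build the API URL for a given category/owner/repo, mirroring the
# path shape of the curl example above. The response schema is not
# documented here, so the JSON is returned as-is.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Return the quality-API URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, timeout=10):
    """Fetch and decode the JSON response (requires network access)."""
    with urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (network call, uncomment to run):
# data = fetch_quality("mlops", "RunLLM", "aqueduct")
```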