tamohannes/urartu
Build ML pipelines with smart caching and remote execution. Develop locally, deploy to HPC clusters instantly. Track with Aim. 🎯
Urartu helps machine learning engineers build and manage ML pipelines. It combines individual steps (such as data preprocessing, model training, or evaluation) into an automated workflow. The output is a structured, reproducible pipeline that can run on different computing environments, from local machines to high-performance computing clusters.
Use this if you need to build robust, reproducible machine learning workflows with automatic caching and experiment tracking, especially when deploying models to HPC environments.
Not ideal if you are looking for a no-code or low-code solution for simple data processing tasks without complex ML model training.
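The "smart caching" idea described above can be sketched in a few lines: each step's output is keyed by a hash of its inputs, so unchanged steps are skipped on re-runs. This is an illustrative sketch of the general technique, not urartu's actual API (the `cached_step` decorator and `preprocess` step are hypothetical names).

```python
import hashlib
import json
import os
import pickle

# Illustrative sketch only -- NOT urartu's real API. A step's result is
# cached under a hash of its name and inputs, so re-running the pipeline
# with the same inputs skips recomputation.

CACHE_DIR = ".pipeline_cache"

def cached_step(step_fn):
    """Run step_fn(**inputs), reusing a cached result when inputs match."""
    def wrapper(**inputs):
        os.makedirs(CACHE_DIR, exist_ok=True)
        key = hashlib.sha256(
            json.dumps({"step": step_fn.__name__, "inputs": inputs},
                       sort_keys=True).encode()
        ).hexdigest()
        path = os.path.join(CACHE_DIR, key + ".pkl")
        if os.path.exists(path):        # cache hit: reuse stored output
            with open(path, "rb") as f:
                return pickle.load(f)
        result = step_fn(**inputs)      # cache miss: compute and store
        with open(path, "wb") as f:
            pickle.dump(result, f)
        return result
    return wrapper

@cached_step
def preprocess(text):
    return text.strip().lower()

print(preprocess(text="  Hello World  "))  # → hello world
```

The second call with identical inputs would load the pickled result instead of re-running the step, which is the property that makes long HPC pipelines cheap to resume.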
Stars: 13
Forks: 3
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tamohannes/urartu"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
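For use beyond the shell, the same endpoint can be called from Python. The sketch below only builds the URL shown in the curl example; the response schema is not documented on this page, so the (commented-out) fetch and any field names would be assumptions.

```python
import urllib.request
import json

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "tamohannes", "urartu")
print(url)

# To actually fetch (subject to the 100 requests/day unauthenticated limit):
# with urllib.request.urlopen(url) as resp:
#     data = json.loads(resp.read())  # response schema is an assumption
```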
Higher-rated alternatives
muna-ai/muna-py
Run AI models anywhere. https://muna.ai/explore
clearml/clearml-pycharm-plugin
ClearML PyCharm Plugin
sql-machine-learning/elasticdl
Kubernetes-native Deep Learning Framework
microsoft/AKSDeploymentTutorial
Tutorial on how to deploy Deep Learning models on GPU enabled Kubernetes cluster
Langhalsdino/Kubernetes-GPU-Guide
This guide should help fellow researchers and hobbyists to easily automate and accelerate their...