tamohannes/urartu

Build ML pipelines with smart caching and remote execution. Develop locally, deploy to HPC clusters instantly. Track with Aim. 🎯

Score: 45/100 (Emerging)

This tool helps machine learning engineers build and manage ML pipelines. It combines individual steps (such as data preprocessing, model training, or evaluation) into an automated workflow. The output is a structured, reproducible pipeline that runs across computing environments, from local machines to high-performance computing clusters.
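The core idea of step-level caching described above can be sketched in a few lines of plain Python. This is a generic illustration of the technique, not urartu's actual API: each step's output is keyed by its name and inputs, so unchanged steps are skipped on re-runs.

```python
import hashlib
import json
import pickle
from pathlib import Path

CACHE_DIR = Path(".cache")


def cached_step(fn):
    """Re-run a pipeline step only when its name or inputs change.

    Hypothetical sketch of the caching idea, not urartu's real decorator.
    """
    def wrapper(*args):
        key = hashlib.sha256(
            json.dumps([fn.__name__, args], default=str).encode()
        ).hexdigest()
        path = CACHE_DIR / key
        if path.exists():
            # Cache hit: skip recomputation entirely.
            return pickle.loads(path.read_bytes())
        out = fn(*args)
        CACHE_DIR.mkdir(exist_ok=True)
        path.write_bytes(pickle.dumps(out))
        return out
    return wrapper


@cached_step
def preprocess(data):
    return [x * 2 for x in data]


@cached_step
def train(features):
    return sum(features) / len(features)


model = train(preprocess([1, 2, 3]))
```

Chaining the decorated steps gives a pipeline where, on a second run, both `preprocess` and `train` resolve from disk instead of recomputing.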

Use this if you need to build robust, reproducible machine learning workflows with automatic caching and experiment tracking, especially when deploying models to HPC environments.

Not ideal if you are looking for a no-code or low-code solution for simple data processing tasks without complex ML model training.

machine-learning-engineering ml-ops hpc-deployment experiment-management ml-workflow-automation
No Package · No Dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 13
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Feb 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tamohannes/urartu"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
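The same endpoint can be called from a script. A minimal Python sketch using only the standard library; the URL path is taken verbatim from the curl example above, and the request itself is left commented out since no key is needed only up to the 100 requests/day limit:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality-report URL for a repo (path from the curl example)."""
    return f"{BASE}/{category}/{repo}"


url = quality_url("ml-frameworks", "tamohannes/urartu")
# data = json.load(urlopen(url))  # uncomment to fetch the live report
```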