sehoffmann/dmlcloud

Painless distributed training with torch

Quality score: 49 / 100 (Emerging)

This library helps machine learning engineers and researchers scale up deep learning model training on high-performance computing (HPC) clusters. It takes standard PyTorch training scripts and makes it easy to distribute the workload across multiple GPUs and nodes, speeding up training of complex models and enabling quicker experimentation and deployment.

Available on PyPI.

Use this if you are a machine learning engineer or researcher using PyTorch and need to train large models efficiently across multiple GPUs or machines in an HPC environment like a Slurm cluster.

Not ideal if you are looking for a high-level deep learning framework that abstracts away most of the PyTorch code, or if you only train models on a single GPU.
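For a sense of what the library streamlines, below is a minimal sketch of the plain torch.distributed boilerplate that a single-GPU script normally has to grow before it can run on multiple GPUs or nodes. It uses only standard PyTorch APIs, not dmlcloud's own interface, and assumes a torchrun-style launcher sets the usual RANK, LOCAL_RANK, and WORLD_SIZE environment variables (on a Slurm cluster these can be derived from SLURM_PROCID and related variables):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Launchers such as torchrun set RANK, LOCAL_RANK, WORLD_SIZE,
    # and the rendezvous variables for every worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    x = torch.randn(32, 128, device=local_rank)  # dummy batch
    loss = model(x).sum()
    loss.backward()  # DDP all-reduces gradients across workers here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with, for example, torchrun --nproc_per_node=4 train.py, each process trains on its own GPU while DDP keeps gradients synchronized; per the description above, dmlcloud's pitch is to take care of this setup for you.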

deep-learning model-training distributed-computing HPC-management ML-research
Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 6 / 25


Stars: 12
Forks: 1
Language: Python
License: BSD-3-Clause
Last pushed: Mar 19, 2026
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sehoffmann/dmlcloud"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
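If you prefer Python over curl, the same endpoint can be queried with the standard library; a minimal sketch, assuming the endpoint returns JSON:

import json
import urllib.request

# Same endpoint as the curl command above; assumes a JSON response body.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sehoffmann/dmlcloud"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))  # pretty-print the quality report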