d9d-project/d9d

d9d - d[istribute]d - distributed training framework based on PyTorch that tries to be efficient yet hackable

Quality score: 51 / 100 (Established)

This framework helps machine learning researchers and engineers efficiently train very large deep learning models across multiple GPUs or machines. You provide your PyTorch model and data, and it manages the complex setup for distributed training, allowing you to get a trained model faster. It's designed for those who need to experiment with novel training approaches without being limited by rigid, predefined systems.

Available on PyPI.

Use this if you are a deep learning researcher or ML engineer building and training custom large-scale models in PyTorch and need a flexible, performant way to distribute your training across multiple devices.

Not ideal if you need a simple command-line tool for training standard, pre-defined models without much customization, or if you are working with older PyTorch versions or hardware.

deep-learning-research large-model-training neural-network-scaling ML-experimentation distributed-computing-ML
Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 22 / 25
Community: 11 / 25

Stars: 13
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Mar 18, 2026
Commits (30d): 0
Dependencies: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/d9d-project/d9d"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.