awslabs/awsome-distributed-training

Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.

Quality score: 61 / 100 (Established)

This project provides pre-built configurations and example test cases to help you efficiently train large machine learning models using AWS services such as Amazon SageMaker HyperPod, AWS ParallelCluster, AWS Batch, and Amazon EKS. It offers reference architectures for setting up the necessary cloud infrastructure and includes training scripts for popular frameworks such as PyTorch and Megatron-LM. Machine learning engineers and researchers who need to scale model training across many machines on AWS can use it to get started quickly.
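To give a sense of what the repository's test cases automate, here is a minimal sketch of a multi-node PyTorch launch on a Slurm cluster, as provisioned by ParallelCluster or HyperPod. The script name train.py, the port, and the GPU counts are placeholders, not taken from the repository:

#!/bin/bash
#SBATCH --nodes=2              # number of instances to train across
#SBATCH --ntasks-per-node=1    # one launcher per node; torchrun spawns the GPU workers
#SBATCH --gpus-per-node=8

# Rendezvous on the first node of the allocation
MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

srun torchrun \
  --nnodes="$SLURM_NNODES" \
  --nproc_per_node=8 \
  --rdzv_backend=c10d \
  --rdzv_endpoint="${MASTER_ADDR}:29500" \
  train.py   # placeholder for one of the repo's framework examples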

Use this if you need to train large machine learning models on AWS and want ready-to-use infrastructure templates and training examples to save setup time.

Not ideal if you are developing small models that don't require distributed training or if you are not using AWS for your machine learning infrastructure.

Tags: Machine Learning Engineering, Large Model Training, Cloud Computing, Distributed Systems, AI Research
No package published; no dependents.

Score breakdown:
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25

Stars: 402
Forks: 176
Language: Shell
License: MIT-0
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/awslabs/awsome-distributed-training"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
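For scripting, the endpoint can be combined with jq to pull out individual fields. A small sketch, assuming the response is JSON; the field name .score and the Authorization header scheme below are assumptions, since the response schema and key mechanism aren't documented here:

#!/bin/bash
URL="https://pt-edge.onrender.com/api/v1/quality/mlops/awslabs/awsome-distributed-training"

# -s silences progress output, -f turns HTTP errors into a non-zero exit code
curl -sf "$URL" | jq .            # pretty-print the full report

# Hypothetical: extract a single field once you know the schema
# curl -sf "$URL" | jq '.score'

# Hypothetical: authenticated request for the higher rate limit
# curl -sf -H "Authorization: Bearer $API_KEY" "$URL"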