mosaicml/composer

Supercharge Your Model Training

Score: 60 / 100 (Established)

Composer is a deep learning training framework designed to help machine learning engineers and researchers train neural networks efficiently at scale. It takes a PyTorch model and dataset as input and returns a trained model much faster than a stock training loop, even on large clusters of GPUs. It is aimed at teams developing and experimenting with modern deep learning models such as LLMs and diffusion models.

5,472 stars. Available on PyPI.

Use this if you are training large-scale deep learning models on clusters of GPUs and want to simplify distributed training, optimize performance, and iterate faster on experiments.

Not ideal if you are working with small models that train quickly on a single GPU or if you prefer to manage all low-level training complexities yourself.

deep-learning-engineering large-language-models neural-network-training machine-learning-research distributed-training
Maintenance 6 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 19 / 25
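The four sub-scores shown above appear to sum to the headline score (6 + 10 + 25 + 19 = 60). A minimal sketch of that aggregation, assuming a simple unweighted sum (the site does not document its exact formula):

```python
# Sub-scores as shown on the card, each out of 25.
subscores = {
    "Maintenance": 6,
    "Adoption": 10,
    "Maturity": 25,
    "Community": 19,
}

# Assumption: the overall score is the plain sum of the four components.
overall = sum(subscores.values())
print(overall)  # 60, matching the 60 / 100 headline
```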


Stars: 5,472
Forks: 463
Language: Python
License: Apache-2.0
Last pushed: Nov 12, 2025
Commits (30d): 0
Dependencies: 16

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mosaicml/composer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
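Beyond curl, the same endpoint can be called from Python. A minimal sketch, assuming only the category/owner/repo path layout visible in the URL above; the response schema is not documented here, so the JSON is left as a plain decoded object:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (path layout assumed
    from the curl example: /{category}/{owner}/{repo})."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "mosaicml", "composer")
print(url)

# Uncomment to fetch the live data (100 requests/day without a key):
# with urlopen(url) as resp:
#     data = json.load(resp)
```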