mosaicml/composer
Supercharge Your Model Training
Composer is a deep learning training framework designed to help machine learning engineers and researchers train neural networks efficiently at scale. It takes a PyTorch-based model and dataset as input and produces a trained model substantially faster than a plain training loop, even on large clusters of GPUs. It is aimed at those developing and experimenting with modern deep learning models such as LLMs or diffusion models.
Available on PyPI.
Use this if you are training large-scale deep learning models on clusters of GPUs and want to simplify distributed training, optimize performance, and iterate faster on experiments.
Not ideal if you are working with small models that train quickly on a single GPU or if you prefer to manage all low-level training complexities yourself.
Stars
5,472
Forks
463
Language
Python
License
Apache-2.0
Category
ML Frameworks
Last pushed
Nov 12, 2025
Commits (30d)
0
Dependencies
16
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mosaicml/composer"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
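The same endpoint can be called from Python with only the standard library. A minimal sketch, assuming the URL layout shown in the curl command above; the `Authorization: Bearer` header used for keyed access is an assumption and not confirmed by this page, as is the JSON shape of the response.

```python
import json
import urllib.request
from typing import Optional

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Fetch quality data for a repo; a key raises the daily limit."""
    req = urllib.request.Request(build_url(category, owner, repo))
    if api_key:
        # Header name is an assumption; check the API docs for the real one.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires network access):
# data = fetch_quality("ml-frameworks", "mosaicml", "composer")
```

Anonymous calls are limited to 100 requests/day, so cache responses locally if you poll many repositories.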
Related frameworks
pytorch/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
keras-team/keras
Deep Learning for humans
Lightning-AI/torchmetrics
Machine learning metrics for distributed, scalable PyTorch applications.
Lightning-AI/pytorch-lightning
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
lanpa/tensorboardX
tensorboard for pytorch (and chainer, mxnet, numpy, ...)