Lightning-AI/pytorch-lightning
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
PyTorch Lightning helps AI engineers and researchers streamline training and fine-tuning of deep learning models. You write the data handling and model logic in plain PyTorch; Lightning automates the surrounding training infrastructure (device placement, distributed strategies, checkpointing) and produces an optimized, ready-to-deploy model. It is used by everyone from individual researchers to large-scale AI teams.
30,923 stars. Used by 83 other packages. Actively maintained with 38 commits in the last 30 days. Available on PyPI.
Use this if you are a deep learning practitioner who wants to focus on model development and architecture, without getting bogged down in boilerplate code for managing distributed training, mixed precision, or multi-GPU setups.
Not ideal if you need to build custom inference servers for your models, as you would need to integrate with a separate tool like LitServe for that specific task.
Stars
30,923
Forks
3,683
Language
Python
License
Apache-2.0
Category
ml-frameworks
Last pushed
Mar 10, 2026
Commits (30d)
38
Dependencies
8
Reverse dependents
83
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Lightning-AI/pytorch-lightning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
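The same endpoint can be queried from Python with only the standard library. The field names in `summarize` (`stars`, `forks`) are assumptions based on the stats shown on this page; check the actual JSON response before relying on them.

```python
# Hypothetical sketch of calling the quality API from Python instead of curl.
import json
from urllib.request import urlopen

API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/Lightning-AI/pytorch-lightning")

def summarize(payload: dict) -> str:
    """Format a one-line summary from an API payload (assumed field names)."""
    return f"{payload.get('stars', '?')} stars, {payload.get('forks', '?')} forks"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch and decode the JSON payload (counts against the daily quota)."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Usage: print(summarize(fetch_quality()))
```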
Related frameworks
pytorch/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
keras-team/keras
Deep Learning for humans
Lightning-AI/torchmetrics
Machine learning metrics for distributed, scalable PyTorch applications.
lanpa/tensorboardX
tensorboard for pytorch (and chainer, mxnet, numpy, ...)
rwth-i6/returnn
The RWTH extensible training framework for universal recurrent neural networks