Tony-Y/pytorch_warmup
Learning Rate Warmup in PyTorch
This is a PyTorch extension that helps machine learning engineers and researchers stabilize the initial phase of training deep learning models. It wraps your existing PyTorch optimizer and learning rate scheduler with various warmup schedules, yielding more stable and potentially faster training. The primary users are deep learning practitioners who build and train neural networks.
415 stars. No commits in the last 6 months. Available on PyPI.
Use this if your PyTorch model trains unstably or converges slowly in its first steps and you want to apply a learning rate warmup strategy.
Not ideal if you are not using PyTorch, or if your problem goes beyond initial training stability.
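The core idea behind such warmup schedules can be sketched without the library: for the first `warmup_period` steps, the learning rate chosen by the main scheduler is dampened by a factor that ramps linearly from near zero up to one. The sketch below is a standalone illustration of linear warmup; the names `warmup_factor` and `warmup_period` are illustrative, not this library's API.

```python
def warmup_factor(step: int, warmup_period: int) -> float:
    """Linear warmup: LR multiplier ramps from 1/warmup_period up to 1.0."""
    return min(1.0, (step + 1) / warmup_period)

# Dampen a base learning rate produced by the main scheduler.
base_lr = 0.001
warmup_period = 5
lrs = [base_lr * warmup_factor(t, warmup_period) for t in range(8)]
# Early steps ramp up linearly; after warmup the base LR passes through unchanged.
```

With the library itself, the documented pattern is similar in spirit: you construct a warmup object around your optimizer and call the regular scheduler's `step()` inside the warmup object's dampening context, so the warmup factor multiplies whatever learning rate the scheduler would otherwise set.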
Stars: 415
Forks: 23
Language: Python
License: MIT
Last pushed: Jun 19, 2025
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Tony-Y/pytorch_warmup"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
opthub-org/pytorch-bsf
PyTorch implementation of Bezier simplex fitting
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)