clovaai/AdamP
AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)
This project helps machine learning engineers train deep neural networks more effectively. It modifies common momentum-based optimizers such as Adam and SGD to fix how they interact with normalized, scale-invariant network weights, where momentum inflates the effective step size. The input is your existing PyTorch model and training setup; the output is more stable and potentially higher-performing training. It is intended for machine learning practitioners building and training deep learning models.
415 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are training deep neural networks using momentum-based optimizers (like Adam or SGD) and want to improve model performance and training stability, especially when using normalization techniques.
Not ideal if you are working with machine learning models that do not rely on deep neural networks or if you are not using PyTorch.
Stars: 415
Forks: 54
Language: Python
License: MIT
Category:
Last pushed: Jan 13, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/clovaai/AdamP"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Related frameworks
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
opthub-org/pytorch-bsf
PyTorch implementation of Bezier simplex fitting
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)