fabian-sp/MoMo
MoMo: Momentum Models for Adaptive Learning Rates
This tool helps machine learning engineers and researchers streamline the training of deep learning models. It automatically adapts the learning rate of popular optimizers such as SGD with momentum and Adam, reducing the need for extensive manual tuning. By replacing your existing optimizer with MoMo or MoMo-Adam and supplying the current loss value at each step, you can reach efficient convergence with less hyperparameter searching.
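The reason the loss value must be supplied is that MoMo derives a Polyak-type adaptive step size from a model of the loss. The toy sketch below illustrates that core idea on a 1-D quadratic; it is not the library's implementation, and the names `polyak_step`, `lr_max`, and `loss_lower_bound` are illustrative only (consult the repository's README for the actual API).

```python
# Toy illustration of a loss-informed (Polyak-type) step size, the idea
# underlying MoMo's adaptive learning rates. NOT the library's code: the
# real optimizer builds a momentum-based model of the loss over iterates.

def polyak_step(x, loss, grad, lr_max=1.0, loss_lower_bound=0.0):
    """One update with a step size computed from the current loss,
    capped at lr_max (the user-supplied learning rate acts as a cap)."""
    t = min(lr_max, (loss - loss_lower_bound) / (grad * grad + 1e-12))
    return x - t * grad

# Minimize f(x) = (x - 3)^2, whose minimum value is 0.
x = 0.0
for _ in range(30):
    loss = (x - 3.0) ** 2
    grad = 2.0 * (x - 3.0)
    x = polyak_step(x, loss, grad)

print(x)  # approaches the minimizer 3.0
```

Because the step size is computed from the distance between the current loss and its lower bound, no decay schedule is needed: the step shrinks automatically as the loss approaches its optimum.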
No commits in the last 6 months.
Use this if you are a machine learning practitioner who trains deep learning models and wants to reduce the time and effort spent on manually tuning learning rates for your optimizers.
Not ideal if you are working with a highly specialized or custom optimization algorithm that is not based on SGD with momentum or Adam, or if you prefer full manual control over every aspect of your learning rate schedule.
Stars
20
Forks
1
Language
Python
License
MIT
Last pushed
Jun 12, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fabian-sp/MoMo"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch
opthub-org/pytorch-bsf
PyTorch implementation of Bezier simplex fitting
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)