wyzjack/AdaM3
[ICDM 2023] Momentum is All You Need for Data-Driven Adaptive Optimization
AdaM3 is an optimizer for training deep neural networks. It modifies how adaptive gradient methods such as Adam scale per-parameter learning rates, with the goal of faster and more stable training. The input is a deep learning model and training data; the output is a trained model that the authors report generalizes better to unseen data, overcoming common issues such as getting stuck in poor local minima. A minimal usage sketch follows the fit notes below.
No commits in the last 6 months.
Use this if you are training deep neural networks and want an optimizer that aims for both fast convergence and strong generalization.
Not ideal if you are not training deep learning models, or if existing adaptive optimizers such as Adam already work well for you.
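For context, here is a minimal sketch of where such an optimizer plugs into a PyTorch training loop, shown with torch.optim.Adam, the baseline the description mentions. AdaM3's own import path and constructor arguments are not documented on this page, so treating it as a drop-in replacement on the optimizer line is an assumption, not the repository's confirmed API.

import torch
import torch.nn as nn

# Standard training loop using Adam, the baseline this project modifies.
# If AdaM3 follows the torch.optim.Optimizer interface (an assumption),
# it would replace Adam on the optimizer line below.
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
loss_fn = nn.MSELoss()

x = torch.randn(32, 10)  # toy batch of inputs
y = torch.randn(32, 1)   # toy targets

for step in range(100):
    optimizer.zero_grad()        # clear accumulated gradients
    loss = loss_fn(model(x), y)  # forward pass
    loss.backward()              # backpropagate
    optimizer.step()             # adaptive per-parameter update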
Stars: 26
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Mar 30, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/wyzjack/AdaM3"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
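For programmatic use, here is a minimal Python sketch of the same request. The URL comes from the curl example above; the assumption that the endpoint returns JSON, and whatever fields that JSON contains, is not documented here.

import requests

# Endpoint taken from the curl example above; response schema is an assumption.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/wyzjack/AdaM3"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()      # surface HTTP errors (e.g., rate limiting)
data = resp.json()           # parse the JSON payload, whatever fields it has
print(data)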
Higher-rated alternatives
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch
opthub-org/pytorch-bsf
PyTorch implementation of Bezier simplex fitting
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)