Gunale0926/Grams
Grams: Gradient Descent with Adaptive Momentum Scaling (ICLR 2025 Workshop)
Grams is an optimizer for training deep learning models. It plugs into an existing training setup in place of a standard optimizer — you supply your model parameters and learning-rate settings — and aims to deliver faster or more stable convergence, potentially improving final model performance. It is intended for anyone training neural networks who wants better training efficiency than standard optimizers provide.
No commits in the last 6 months.
Use this if you are training deep learning models and want an advanced optimization algorithm to improve training efficiency and model performance.
Not ideal if you are not working with deep learning models or do not have a need for custom optimization strategies beyond standard optimizers.
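To make the idea concrete, here is a minimal NumPy sketch of a Grams-style update step. This is an illustration based on the paper's title ("adaptive momentum scaling"), not the repo's actual API: the assumption here is an Adam-like moment estimate whose *magnitude* scales the step while the *direction* follows the current gradient's sign. Function and variable names (`grams_step`, `m`, `v`) are hypothetical.

```python
import numpy as np

def grams_step(param, grad, m, v, t,
               lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One hypothetical Grams-style update (illustrative sketch, not the repo's code).

    Maintains Adam-style first/second moment estimates, then applies a step
    whose magnitude comes from the Adam update but whose sign follows the
    raw gradient.
    """
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (scaling)
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    adam_update = m_hat / (np.sqrt(v_hat) + eps)
    # Assumed core idea: magnitude from the adaptive estimate,
    # direction from the current gradient's sign.
    update = np.sign(grad) * np.abs(adam_update)
    return param - lr * update, m, v
```

For the exact update rule, defaults, and the real optimizer class, consult the repository and the workshop paper; this sketch only conveys the general shape of an adaptive-momentum step.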
Stars
17
Forks
1
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 06, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Gunale0926/Grams"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
opthub-org/pytorch-bsf
PyTorch implementation of Bezier simplex fitting
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)