clovaai/AdamP

AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)

54 / 100 (Established)

This project helps machine learning engineers train deep neural networks more effectively. It modifies common momentum-based optimizers such as Adam and SGD, which tend to inflate the norm of scale-invariant weights (weights followed by normalization layers) and thereby shrink the effective step size as training progresses; the modified optimizers suppress that norm growth. The input is your existing PyTorch model and training setup, and the output is a more stable and potentially higher-performing trained model. It is intended for machine learning practitioners building and training deep learning models.
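A rough usage sketch is shown below, assuming the PyPI package adamp exposes an AdamP class with a torch.optim-style constructor (the lr, betas, and weight_decay values are placeholders; check the repository README for the exact arguments).

# Minimal sketch: swap torch.optim.Adam for AdamP in an existing training loop.
# Assumes `pip install adamp` provides a torch.optim-style AdamP class.
import torch
import torch.nn as nn
from adamp import AdamP

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.BatchNorm1d(64),   # normalization makes the preceding layer's weights
    nn.ReLU(),            # scale-invariant, the case AdamP targets
    nn.Linear(64, 10),
)

# Drop-in replacement for torch.optim.Adam(model.parameters(), lr=1e-3, ...)
optimizer = AdamP(model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=1e-2)

criterion = nn.CrossEntropyLoss()
inputs, targets = torch.randn(32, 128), torch.randint(0, 10, (32,))

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()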

415 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are training deep neural networks using momentum-based optimizers (like Adam or SGD) and want to improve model performance and training stability, especially when using normalization techniques.

Not ideal if you are working with machine learning models that do not rely on deep neural networks or if you are not using PyTorch.

deep-learning model-training neural-networks optimization computer-vision
Status: Stale (6 months), no dependent packages
Maintenance 0 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 19 / 25


Stars: 415
Forks: 54
Language: Python
License: MIT
Last pushed: Jan 13, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/clovaai/AdamP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.