thetechdude124/Adam-Optimization-From-Scratch

📈 Implementing the ADAM optimizer from the ground up with PyTorch and comparing its performance on six 3-D objective functions (each progressively more difficult to optimize) against SGD, AdaGrad, and RMSProp.

Score: 28 / 100 (Experimental)

This project helps machine learning practitioners efficiently optimize complex models, particularly those with many parameters or high-dimensional data. Given a model's loss function and its current parameters, the optimizer iteratively updates those parameters toward values that improve model performance. Data scientists, machine learning engineers, and researchers can use this to accelerate the training of deep learning models.
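The core of that iterative update is the Adam rule: exponential moving averages of the gradient and its square, bias-corrected, then used to scale the step. Below is a minimal, dependency-free sketch of one Adam step applied to a toy 1-D quadratic; it is illustrative only and is not the repository's notebook code (function and variable names here are made up for the example).

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)             # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy example: minimize f(x) = x^2 (gradient 2x) starting from x = 5.0
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
print(x)  # approaches the minimum at x = 0
```

The same scalar recurrences apply element-wise to whole parameter tensors, which is what the repository's PyTorch implementation benchmarks against SGD, AdaGrad, and RMSProp.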

No commits in the last 6 months.

Use this if you are developing or training machine learning models and need a robust, computationally efficient method to find optimal model parameters, especially for large-scale or complex problems.

Not ideal if your primary goal is to understand basic gradient descent for very simple, low-dimensional problems, or if you require an optimizer that is invariant to hyperparameter choices.

machine-learning-training deep-learning-optimization neural-network-tuning model-convergence gradient-descent-methods
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 14 / 25


Stars: 22
Forks: 4
Language: Jupyter Notebook
License: none
Last pushed: Jul 02, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/thetechdude124/Adam-Optimization-From-Scratch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.