Gunale0926/Grams

Grams: Gradient Descent with Adaptive Momentum Scaling (ICLR 2025 Workshop)

Score: 27 / 100 (Experimental)

Grams is an optimizer for training deep learning models. Dropped in where a standard optimizer would go, it takes your existing model parameters and learning rate and applies adaptive momentum scaling to the updates, aiming for faster or more stable convergence and potentially better final model performance. It's designed for anyone working with neural networks who wants more than the standard optimizers offer.
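The paper's title describes the core idea: scale the momentum-based step adaptively. A minimal pure-Python sketch of one common reading of that rule (magnitude from an Adam-style step, direction from the current gradient) is below; the function name, hyperparameter names, and the exact rule are illustrative assumptions, not the repo's actual API.

```python
import math

def grams_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One scalar parameter update sketching the Grams idea: take the
    update's magnitude from an Adam-style momentum step, but its direction
    from the current gradient. Hyperparameter names follow Adam convention;
    this is an illustrative reconstruction, not the repo's implementation."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad * grad   # second moment
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    adam_step = lr * m_hat / (math.sqrt(v_hat) + eps)
    sign = (grad > 0) - (grad < 0)              # direction from current gradient
    return param - sign * abs(adam_step), m, v

# Toy run on f(x) = x^2 (gradient 2x): x should head toward the minimum at 0.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    x, m, v = grams_step(x, 2 * x, m, v, t, lr=0.1)
print(x)
```

Because the direction always follows the current gradient, the step never overshoots in a stale-momentum direction, which is the kind of stability benefit the description above alludes to.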

No commits in the last 6 months.

Use this if you are training deep learning models and want an advanced optimization algorithm to improve training efficiency and model performance.

Not ideal if you are not training deep learning models, or if standard optimizers already meet your needs.

deep-learning-training neural-network-optimization machine-learning-research model-convergence ai-development
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 17
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Mar 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Gunale0926/Grams"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
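The curl command above follows a simple URL pattern. A minimal Python sketch of the same call, using only the standard library; the collection segment (`ml-frameworks`) and response format are taken on faith from the example above:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection, owner, repo):
    """Build the quality-API URL, following the pattern in the curl example."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection, owner, repo):
    """Fetch and decode the quality data for a repo (no key: 100 requests/day).
    Assumes the endpoint returns JSON; the response schema is not documented here."""
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "Gunale0926", "Grams"))
```

Calling `fetch_quality("ml-frameworks", "Gunale0926", "Grams")` performs the same request as the curl command.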