fabian-sp/MoMo

MoMo: Momentum Models for Adaptive Learning Rates

Score: 27 / 100 (Experimental)

This tool helps machine learning engineers and researchers streamline the training of deep learning models. It automatically adjusts the learning rate for popular optimizers such as SGD with momentum and Adam, reducing the need for extensive manual tuning. By replacing your existing optimizer with MoMo or MoMo-Adam and passing the current loss value to each optimizer step, you can reach good convergence with far less hyperparameter searching.
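A minimal training-loop sketch of that drop-in usage, assuming the package exposes a Momo class importable as "from momo import Momo" and a step(loss=...) call; verify the exact names and signatures against the repository README:

import torch
from momo import Momo  # assumed import path; check the repository README

# toy model and data, just to show where the optimizer plugs in
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
optimizer = Momo(model.parameters(), lr=1.0)  # assumed constructor signature

x = torch.randn(32, 10)
y = torch.randn(32, 1)

for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    # the current loss value is passed to step() so MoMo can adapt the step size
    optimizer.step(loss=loss)

The only change from a standard PyTorch loop is the optimizer class and the loss argument to step(); everything else stays the same.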

No commits in the last 6 months.

Use this if you are a machine learning practitioner who trains deep learning models and wants to cut the time and effort spent manually tuning optimizer learning rates.

Not ideal if you are working with a highly specialized or custom optimization algorithm that is not based on SGD with momentum or Adam, or if you prefer full manual control over every aspect of your learning rate schedule.

deep-learning-training model-optimization hyperparameter-tuning neural-network-training machine-learning-research
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 5 / 25

Stars: 20
Forks: 1
Language: Python
License: MIT
Last pushed: Jun 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fabian-sp/MoMo"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
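For scripted access, the same endpoint can be queried from Python; a minimal sketch, assuming the endpoint returns a JSON body (its field names are not documented here):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fabian-sp/MoMo"
resp = requests.get(url, timeout=10)
resp.raise_for_status()          # fail loudly on HTTP errors or rate limiting
print(resp.json())               # assumes a JSON response, as implied by the API description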