nachiket273/One_Cycle_Policy

PyTorch notebook with a One Cycle Policy implementation (https://arxiv.org/abs/1803.09820)

Score: 34 / 100 (Emerging)

This repository helps deep learning practitioners train neural networks more efficiently by scheduling key hyperparameters during training. Following the One Cycle Policy (Smith, 2018), the learning rate is ramped up and then annealed back down over a single cycle, with momentum varied inversely, which can lead to faster convergence and better model performance. Data scientists, machine learning engineers, and AI researchers who train deep learning models will find it useful.
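The schedule described above can be sketched in plain Python. This is an illustrative implementation, not the repository's code; the parameter names (`max_lr`, `div_factor`, `final_div_factor`, `pct_start`) are borrowed from PyTorch's `torch.optim.lr_scheduler.OneCycleLR` for familiarity, and cosine annealing is assumed for both phases:

```python
import math

def one_cycle_lr(step, total_steps, max_lr=0.1, div_factor=25.0,
                 final_div_factor=1e4, pct_start=0.3):
    """Learning rate at `step` under a one-cycle schedule (sketch).

    Phase 1 (first pct_start of training): anneal from max_lr/div_factor
    up to max_lr. Phase 2: anneal from max_lr down to
    max_lr/div_factor/final_div_factor. Cosine interpolation is assumed.
    """
    initial_lr = max_lr / div_factor
    min_lr = initial_lr / final_div_factor
    warmup_steps = int(total_steps * pct_start)
    if step < warmup_steps:
        # Warmup phase: cosine-anneal upward from initial_lr to max_lr.
        pct = step / warmup_steps
        return initial_lr + (max_lr - initial_lr) * (1 - math.cos(math.pi * pct)) / 2
    # Annealing phase: cosine-anneal downward from max_lr to min_lr.
    pct = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr + (min_lr - max_lr) * (1 - math.cos(math.pi * pct)) / 2
```

In an actual training loop, the equivalent built-in scheduler would be stepped once per batch; this standalone function only shows the shape of the curve.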

No commits in the last 6 months.

Use this if you are training a neural network and want to optimize learning rates and other hyperparameters to achieve better model performance more quickly.

Not ideal if you are not working with deep learning models or do not have a need to fine-tune training parameters.

deep-learning neural-networks model-training hyperparameter-tuning machine-learning-engineering
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 17 / 25


Stars: 73
Forks: 13
Language: Jupyter Notebook
License: None
Last pushed: Jun 25, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nachiket273/One_Cycle_Policy"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.