nachiket273/One_Cycle_Policy
PyTorch notebook implementing the One Cycle Policy (https://arxiv.org/abs/1803.09820)
The One Cycle Policy (Leslie Smith, 2018) schedules the learning rate from a small initial value up to a maximum and back down over a single cycle, while varying momentum inversely. Applied to a network and its training data, it produces a training schedule that can yield faster convergence (sometimes called "super-convergence") and better final accuracy than a fixed learning rate. Data scientists, machine learning engineers, and AI researchers who train deep learning models will find this useful.
No commits in the last 6 months.
Use this if you are training a neural network and want a principled way to tune the learning rate and related hyperparameters for faster, better training.
Not ideal if you are not training deep learning models or have no need to tune training schedules.
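To make the schedule concrete, here is a minimal, self-contained sketch of the one-cycle shape: the learning rate warms up from `max_lr / div_factor` to `max_lr` over the first fraction of training, then anneals down to a much smaller final value. This is an illustrative helper written for this listing, not code from the repository; the function name and parameters (`div_factor`, `final_div_factor`, `pct_start`) are assumptions modeled on common one-cycle implementations, using cosine-shaped phases.

```python
import math

def one_cycle_lr(step, total_steps, max_lr, div_factor=25.0,
                 final_div_factor=1e4, pct_start=0.3):
    """Learning rate at `step` under a one-cycle schedule (cosine phases).

    Hypothetical helper illustrating the policy: warm up from
    max_lr / div_factor to max_lr over the first pct_start of training,
    then anneal down to (max_lr / div_factor) / final_div_factor.
    """
    initial_lr = max_lr / div_factor
    min_lr = initial_lr / final_div_factor
    warmup_steps = int(pct_start * total_steps)
    if step < warmup_steps:
        # Phase 1: cosine ramp from initial_lr up to max_lr.
        pct = step / max(1, warmup_steps)
        return initial_lr + (max_lr - initial_lr) * (1 - math.cos(math.pi * pct)) / 2
    # Phase 2: cosine anneal from max_lr down to min_lr.
    pct = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + (max_lr - min_lr) * (1 + math.cos(math.pi * pct)) / 2
```

In practice the same shape is available in PyTorch as `torch.optim.lr_scheduler.OneCycleLR`; the sketch above only shows the curve the notebook implements.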
Stars
73
Forks
13
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Jun 25, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nachiket273/One_Cycle_Policy"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mrdbourke/pytorch-deep-learning
Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.
xl0/lovely-tensors
Tensors, for human consumption
stared/livelossplot
Live training loss plot in Jupyter Notebook for Keras, PyTorch and others
dataflowr/notebooks
code for deep learning courses
dvgodoy/PyTorchStepByStep
Official repository of my book: "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide"