LucasBoTang/GradNorm

PyTorch implementation of GradNorm, a gradient-normalization method for adaptive loss balancing in multi-task networks

Score: 35 / 100 (Emerging)

When training deep learning models that handle multiple related tasks simultaneously, some tasks commonly learn faster than others, leaving the model imbalanced. This tool helps machine learning engineers and researchers dynamically re-weight the per-task losses during training so that all tasks learn at a more balanced rate. You provide your multi-task neural network, its loss functions, and training data; it outputs optimized task weights and training-loss logs, improving overall model performance.
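The core idea can be sketched in a few lines. Below is a minimal NumPy illustration of a GradNorm-style weight update, not the repo's actual PyTorch code: it assumes each task's shared-layer gradient norm G_i scales linearly with its weight w_i (so dG_i/dw_i ≈ G_i / w_i), whereas the real implementation differentiates through the network with autograd. The function name and default hyperparameters (alpha, lr) are illustrative.

```python
import numpy as np

def gradnorm_weights(weights, grad_norms, loss_ratios, alpha=1.5, lr=0.025):
    """One GradNorm-style task-weight update (illustrative sketch).

    weights:     current task weights w_i
    grad_norms:  norms G_i of the gradient of w_i * L_i at the shared layer
    loss_ratios: L_i(t) / L_i(0), a proxy for each task's inverse training rate
    """
    weights = np.asarray(weights, dtype=float)
    grad_norms = np.asarray(grad_norms, dtype=float)
    loss_ratios = np.asarray(loss_ratios, dtype=float)

    # relative inverse training rate: > 1 means the task is lagging
    r = loss_ratios / loss_ratios.mean()
    # target gradient norm: lagging tasks get a larger target
    target = grad_norms.mean() * r ** alpha
    # gradient of the L1 balancing loss |G_i - target_i| w.r.t. w_i,
    # using the simplifying assumption dG_i/dw_i ≈ G_i / w_i
    grad_w = np.sign(grad_norms - target) * grad_norms / weights
    new_w = weights - lr * grad_w
    # renormalize so the weights sum to the number of tasks
    return new_w * len(new_w) / new_w.sum()
```

For example, a task whose loss ratio is low (it already learned a lot) and whose gradient norm is high gets its weight pushed down, shifting training effort toward the slower tasks.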

119 stars. No commits in the last 6 months.

Use this if you are developing or training deep learning models that perform multiple tasks at once and are struggling with balancing the training progress across these different tasks.

Not ideal if you are working with single-task deep learning models or do not require dynamic adjustment of task losses during training.

deep-learning-training multi-task-learning neural-network-optimization machine-learning-engineering
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 119
Forks: 7
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Sep 04, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/LucasBoTang/GradNorm"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
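The same endpoint can be queried from Python with only the standard library. This is a sketch using the URL shown above; the JSON response schema is not documented here, so the code simply decodes and returns whatever the API sends back. The helper names are my own.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0):
    """Fetch and decode the JSON quality report (schema assumed, not documented)."""
    url = quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # equivalent of the curl command above
    print(fetch_quality("ml-frameworks", "LucasBoTang", "GradNorm"))
```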