allenai/staged-training

Staged Training for Transformer Language Models

Score: 29 / 100 (Experimental)

This project helps machine learning researchers and engineers efficiently train large Transformer language models. It takes an existing partially trained model and intelligently expands its size (depth or width) to continue training a larger model without starting from scratch. This allows practitioners to achieve better model performance with less computational expense.
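
As described above, growth happens along depth or width. A minimal, hypothetical sketch of a depth-growth step in PyTorch (the function name and duplicate-layer strategy are illustrative assumptions, not this repository's actual API):

    import copy
    import torch.nn as nn

    def grow_depth(layers: nn.ModuleList) -> nn.ModuleList:
        # Double the depth by interleaving a copy of each trained layer,
        # so the grown model starts close to the original's behavior.
        # (Hypothetical illustration; the repo's growth operators may differ.)
        grown = []
        for layer in layers:
            grown.append(layer)
            grown.append(copy.deepcopy(layer))  # reuse the trained weights
        return nn.ModuleList(grown)

Width growth would analogously expand hidden dimensions while reusing the trained weights.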

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer developing large language models and want to save significant computational resources by gradually scaling your models.

Not ideal if you are not working with Transformer language models or if you prefer to train models from scratch without incremental growth.

Tags: natural-language-processing, large-language-models, deep-learning-training, computational-efficiency, model-scaling
Flags: Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 6 / 25

Stars: 33
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Mar 31, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/allenai/staged-training"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
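
The same request in Python, as a minimal sketch using the requests library (this assumes the endpoint returns JSON; the response schema is not documented here):

    import requests

    # Python equivalent of the curl command above (illustrative; schema undocumented).
    URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/allenai/staged-training"

    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()  # surfaces HTTP errors, e.g. hitting the 100 requests/day limit
    print(resp.json())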