allenai/staged-training
Staged Training for Transformer Language Models
This project helps machine learning researchers and engineers train large Transformer language models efficiently. It takes an existing, partially trained model and expands its depth or width so that training can continue on the larger model without starting from scratch, reaching better performance at lower compute cost.
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer developing large language models and want to save significant computational resources by gradually scaling your models.
Not ideal if you are not working with Transformer language models or if you prefer to train models from scratch without incremental growth.
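As a rough illustration of the growth step described above, here is a minimal Python sketch of one common depth-growth operator: duplicating each Transformer block in place. This assumes a PyTorch nn.ModuleList of blocks; the function name and approach are illustrative only, and the repository's actual growth operators are designed to preserve the training loss and may differ in detail.

    import copy
    import torch.nn as nn

    def grow_depth(blocks: nn.ModuleList) -> nn.ModuleList:
        # Hypothetical depth-growth operator: double the layer count
        # by copying each block immediately after itself. The repo's
        # real operators are constructed so the grown model keeps the
        # smaller model's loss, which this naive copy does not guarantee.
        grown = []
        for block in blocks:
            grown.append(block)                 # keep the original block
            grown.append(copy.deepcopy(block))  # insert a duplicate after it
        return nn.ModuleList(grown)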
Stars: 33
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Category: transformers
Last pushed: Mar 31, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/allenai/staged-training"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
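For programmatic access, a minimal Python sketch of calling the endpoint from the curl example above; the response schema is not documented here, so the script simply prints the raw JSON:

    import requests

    # Endpoint copied from the curl example above; no API key is
    # needed at the free tier (100 requests/day).
    url = "https://pt-edge.onrender.com/api/v1/quality/transformers/allenai/staged-training"

    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # fail loudly on HTTP errors
    print(resp.json())       # field names are not documented here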
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action