VITA-Group/LiGO

[ICLR 2023] "Learning to Grow Pretrained Models for Efficient Transformer Training" by Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, Philip Greengard, Leonid Karlinsky, Rogerio Feris, David Cox, Zhangyang Wang, Yoon Kim

Score: 38 / 100 · Emerging

This project helps machine learning engineers and researchers train large Transformer models such as BERT and RoBERTa more efficiently. Instead of training from scratch, it takes a smaller pretrained language model and learns to grow it into a larger architecture. The result is a bigger, more capable model ready for fine-tuning on downstream natural language tasks, at a fraction of the compute and time that training such a model from scratch typically requires.
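To make the growth idea concrete, below is a minimal PyTorch sketch of initializing a wider layer from a smaller pretrained one via a learned linear operator, in the spirit of the paper. This is an illustration only, not the repo's actual API; the class name WidthGrowth and the single-layer setup are hypothetical.

import torch
import torch.nn as nn

class WidthGrowth(nn.Module):
    """Hypothetical width-growth operator: maps a (d_small x d_small)
    weight matrix to a (d_large x d_large) one via W_large = A @ W_small @ B^T."""
    def __init__(self, d_small: int, d_large: int):
        super().__init__()
        # Rectangular-identity init so growth starts close to plain copying.
        self.a = nn.Parameter(torch.eye(d_large, d_small))  # output-side map
        self.b = nn.Parameter(torch.eye(d_large, d_small))  # input-side map

    def forward(self, w_small: torch.Tensor) -> torch.Tensor:
        return self.a @ w_small @ self.b.t()

# Initialize a larger layer from a smaller pretrained one.
small = nn.Linear(256, 256)
grow = WidthGrowth(256, 512)
large = nn.Linear(512, 512)
with torch.no_grad():
    large.weight.copy_(grow(small.weight))
# In practice the operator's parameters (a, b) would themselves be trained
# briefly so the grown model preserves the small model's behavior before
# full-scale training resumes.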

No commits in the last 6 months.

Use this if you need to create larger, more capable language models from existing smaller ones without the massive computational cost of training entirely from scratch.

Not ideal if you primarily work with architectures other than Transformers such as BERT or RoBERTa, or if your goal is to train a model from scratch without leveraging a pretrained checkpoint.

Tags: natural-language-processing, large-language-model-training, computational-efficiency, deep-learning-research, model-scaling
Signals: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 92
Forks: 11
Language: Python
License: MIT
Last pushed: Feb 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VITA-Group/LiGO"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
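
The same data can also be fetched programmatically. A minimal Python sketch, assuming the endpoint returns JSON (the response schema is not documented here):

import requests

# Anonymous access is rate-limited to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/VITA-Group/LiGO"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumption: JSON body containing the quality fields shown above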