xrsrke/pipegoose

Large-scale 4D parallelism pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)*

Score: 44 / 100 (Emerging)

This project helps machine learning engineers efficiently pre-train large-scale transformer models, especially multi-modal Mixture of Experts (MoE) models. It wraps existing Hugging Face transformer models and training scripts and distributes them across devices for significantly faster training, using techniques such as data and tensor parallelism. The primary users are ML engineers and researchers working with state-of-the-art large language models.
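The description implies a wrapper-style workflow: load a model with 🤗 transformers, then wrap it for distributed execution. The sketch below illustrates that idea under stated assumptions; the names `ParallelContext`, `TensorParallel`, `DataParallel`, and `parallelize()` are assumptions drawn from the project's description, not confirmed API, and may differ from the actual library.

```python
# Hypothetical sketch: wrap a Hugging Face model for data + tensor parallelism.
# The pipegoose names below (ParallelContext, TensorParallel, DataParallel,
# parallelize) are assumptions for illustration and may not match the real API.
from transformers import AutoModelForCausalLM, AutoTokenizer
from pipegoose.distributed import ParallelContext      # assumed import path
from pipegoose.nn import DataParallel, TensorParallel  # assumed import path

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

# One process per GPU; 2-way tensor parallelism x 2-way data parallelism.
parallel_context = ParallelContext.from_torch(
    tensor_parallel_size=2,
    data_parallel_size=2,
    pipeline_parallel_size=1,
)

# Shard weights across tensor-parallel ranks, then replicate the sharded model
# across data-parallel ranks so each replica sees a different slice of the batch.
model = TensorParallel(model, parallel_context).parallelize()
model = DataParallel(model, parallel_context).parallelize()
model.to("cuda")
```

Launched with something like `torchrun --nproc_per_node=4 train.py`, each process would then hold one shard of the model and train on its own portion of every batch.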

No commits in the last 6 months.

Use this if you are pre-training large transformer-based multi-modal Mixture of Experts models and need to scale training across multiple GPUs or machines efficiently.

Not ideal if you are working with small models or non-transformer architectures, or if you do not have access to multiple GPUs for distributed training.

large-language-models distributed-training mixture-of-experts deep-learning-infrastructure transformer-models
Status: Stale (6 months) · No package published · No known dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 19 / 25

Stars: 87
Forks: 19
Language: Python
License: MIT
Last pushed: Dec 14, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xrsrke/pipegoose"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
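For scripted access, the same endpoint can be queried from Python. This is a minimal sketch using the `requests` library; only the URL and the rate-limit note come from the listing above, and the shape of the JSON response is not documented here.

```python
# Fetch the quality data for xrsrke/pipegoose from the public endpoint.
# No API key is needed for up to 100 requests/day (per the note above).
import requests

resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/transformers/xrsrke/pipegoose",
    timeout=10,
)
resp.raise_for_status()

# The response body is JSON; print it as-is since its fields are not documented here.
print(resp.json())
```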