foundation-model-stack/fms-fsdp

🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash attention v2.

Quality score: 52 / 100 (Established)

This project helps machine learning engineers and researchers efficiently pre-train large language models, such as Llama2, using advanced PyTorch features. It takes pre-tokenized text data as input and outputs a high-performance foundation model checkpoint. It is aimed specifically at those running large-scale distributed training on powerful GPU clusters.
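The tagline refers to two native PyTorch building blocks: FSDP for sharded data parallelism and SDPA, whose fused kernel dispatches to Flash Attention v2 on supported GPUs. As a rough illustration only (this is a minimal sketch, not code from the repo), wrapping a toy model in FSDP and computing attention through torch.nn.functional.scaled_dot_product_attention looks like this, assuming a CUDA machine and a launch via torchrun so the distributed environment variables are set:

import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


class SDPABlock(nn.Module):
    """Single-head causal self-attention using the fused SDPA kernel."""

    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # SDPA picks the fastest available backend, including Flash Attention v2.
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(out)


def main() -> None:
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    model = nn.Sequential(*[SDPABlock(512) for _ in range(4)]).cuda()
    # FSDP shards parameters, gradients, and optimizer state across ranks.
    model = FSDP(model)
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    x = torch.randn(2, 128, 512, device="cuda")
    loss = model(x).float().pow(2).mean()  # stand-in for a real LM loss
    loss.backward()
    opt.step()
    dist.destroy_process_group()


if __name__ == "__main__":
    main()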

Use this if you need to pre-train foundation models from scratch or continue pre-training efficiently on large GPU clusters, aiming for high training throughput.

Not ideal if you need an end-to-end framework that includes data preparation, alignment, or fine-tuning, or if you are not working with large-scale distributed GPU training.

large-language-models distributed-training deep-learning-research gpu-optimization foundation-models
No package. No dependents.

Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25

Stars: 282
Forks: 46
Language: Python
License: Apache-2.0
Last pushed: Nov 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/foundation-model-stack/fms-fsdp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
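For programmatic use, here is a minimal Python equivalent of the curl call above. Only the URL comes from this page; the example assumes the requests library is installed and that the endpoint returns JSON, and since the payload's field names are not documented here, the client simply prints whatever comes back:

import requests

# Same endpoint as the curl example above.
url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/foundation-model-stack/fms-fsdp"
)
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumes a JSON response; inspect it to see the fields
print(data)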