foundation-model-stack/fms-fsdp
🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for distributed training and the SDPA implementation of FlashAttention v2.
This project helps machine learning engineers and researchers efficiently pre-train large language models, such as Llama 2, using advanced PyTorch features. It takes pre-tokenized text data as input and produces a foundation model checkpoint with high training throughput. It is aimed specifically at large-scale distributed training on powerful GPU clusters.
Use this if you need to pre-train foundation models from scratch, or to continue pre-training efficiently on large GPU clusters, with high training throughput as the goal.
Not ideal if you need an end-to-end framework that includes data preparation, alignment, or fine-tuning, or if you are not working with large-scale distributed GPU training.
Stars: 282
Forks: 46
Language: Python
License: Apache-2.0
Last pushed: Nov 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/foundation-model-stack/fms-fsdp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
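For scripted access, the curl call above can be wrapped in a few lines of Python. This is a minimal sketch assuming only the endpoint URL shown on this page; the response schema is not documented here, so the result is returned as parsed JSON without assuming any particular fields.

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(repo: str) -> str:
    """Build the per-repo quality endpoint URL, e.g. for 'owner/name'."""
    return f"{BASE_URL}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch quality data for a repo (keyless access: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL as the curl example above.
    print(quality_url("foundation-model-stack/fms-fsdp"))
```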
Related models
fla-org/flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models
thu-ml/SageAttention
[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x...
thu-ml/SpargeAttn
[ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference.
fla-org/flame
🔥 A minimal training framework for scaling FLA models
NX-AI/mlstm_kernels
Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels.