TJ-Solergibert/transformers-in-supercomputers

Transformers training in a supercomputer with the 🤗 Stack and Slurm

Quality score: 14 / 100 (Experimental)

This project helps machine learning engineers train large language models, specifically Transformer-based architectures, efficiently on supercomputing clusters. It provides practical examples and scripts for distributing training across multiple GPUs and nodes: given a Transformer model and a dataset, it produces a trained model in less wall-clock time, along with insights into which training configurations run fastest.
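Scripts of this kind typically map Slurm's per-task environment variables onto torch.distributed ranks before training starts. The sketch below illustrates that general pattern only; it is not taken from the repository, which may instead launch through 🤗 Accelerate or torchrun. The function name and the NCCL backend choice are assumptions.

import os
import torch
import torch.distributed as dist

def init_distributed_from_slurm():
    # Slurm usually runs one task per GPU; reuse its counters as distributed ranks.
    rank = int(os.environ["SLURM_PROCID"])         # global rank of this task
    world_size = int(os.environ["SLURM_NTASKS"])   # total number of tasks (GPUs)
    local_rank = int(os.environ["SLURM_LOCALID"])  # rank of this task within its node

    # MASTER_ADDR / MASTER_PORT are expected to be exported by the sbatch script,
    # e.g. derived from `scontrol show hostnames $SLURM_JOB_NODELIST`.
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    return rank, world_size, local_rank

if __name__ == "__main__":
    rank, world_size, local_rank = init_distributed_from_slurm()
    if rank == 0:
        print(f"Initialised {world_size} processes across the Slurm allocation")

With this in place, the same Python script can be launched unchanged on 1 GPU or on many nodes, since the rank layout comes entirely from the Slurm allocation.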

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher working with Transformer models and need to significantly reduce training times by utilizing multi-GPU or multi-node supercomputing environments managed by Slurm.

Not ideal if you are looking to train models on a single GPU or standard cloud instances, or if your primary concern is model accuracy rather than training efficiency and distributed performance.

large-language-models distributed-ml high-performance-computing ml-infrastructure deep-learning-optimization
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 15
Forks:
Language: Python
License: None
Last pushed: May 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/TJ-Solergibert/transformers-in-supercomputers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
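For scripted use, the same endpoint can be queried from Python. This is a minimal sketch assuming the endpoint returns JSON; the response fields are not documented here, so inspect the output before relying on specific keys.

import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/TJ-Solergibert/transformers-in-supercomputers"
)

response = requests.get(URL, timeout=10)
response.raise_for_status()          # surface HTTP errors instead of parsing bad data
print(response.json())               # print the raw structure returned by the API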