fattorib/ZeRO-transformer

Two implementations of ZeRO-1 optimizer sharding in JAX

Score: 21 / 100 (Experimental)

This helps machine learning engineers train very large transformer models that would otherwise exceed memory limits, even on powerful hardware. You provide your model and training configuration along with your dataset (typically in a GCP bucket), and the output is a trained transformer model that can be used for various natural language processing tasks.

No commits in the last 6 months.

Use this if you need to train transformer models with over a billion parameters on hardware like a TPU v3-32 and are encountering out-of-memory errors.

Not ideal if you are working with smaller models that fit within standard GPU memory or if you prefer not to use JAX for your deep learning projects.
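
For readers unfamiliar with ZeRO-1, the sketch below illustrates the core idea of optimizer-state sharding in JAX: each device holds only its slice of the optimizer state, gradients are computed data-parallel, every device updates just its own parameter shard, and the full parameters are re-gathered. This is a minimal illustration only, not the repository's implementation; the flat parameter vector, toy quadratic loss, and pmap-based layout are assumptions made for demonstration.

import functools

import jax
import jax.numpy as jnp
import optax

num_devices = jax.local_device_count()
param_dim = 8 * num_devices           # keep the toy parameter vector evenly divisible
params = jnp.ones((param_dim,))       # toy "model": a single flat parameter vector

optimizer = optax.adam(1e-3)

# ZeRO stage 1: every device stores only its slice of the optimizer state.
param_shards = params.reshape(num_devices, -1)
opt_state_shards = jax.pmap(optimizer.init)(param_shards)

def loss_fn(p, batch):
    return jnp.sum((p - batch) ** 2)  # toy quadratic loss, stands in for a transformer

@functools.partial(jax.pmap, axis_name="devices")
def train_step(param_shard, opt_state_shard, batch):
    # 1. Rebuild the full parameter vector on every device.
    full_params = jax.lax.all_gather(param_shard, "devices").reshape(-1)
    # 2. Ordinary data-parallel gradient, averaged across devices.
    grads = jax.grad(loss_fn)(full_params, batch)
    grads = jax.lax.pmean(grads, "devices")
    # 3. Each device updates only its own shard, using only its own optimizer state.
    idx = jax.lax.axis_index("devices")
    grad_shard = grads.reshape(-1, param_shard.shape[0])[idx]
    updates, new_opt_state = optimizer.update(grad_shard, opt_state_shard, param_shard)
    new_param_shard = optax.apply_updates(param_shard, updates)
    return new_param_shard, new_opt_state

batch = jnp.zeros((num_devices, param_dim))   # one replicated toy batch per device
param_shards, opt_state_shards = train_step(param_shards, opt_state_shards, batch)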

large-language-models deep-learning-training natural-language-processing model-optimization distributed-training
Stale (6 months) · No package published · No dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 14
Forks:
Language: Python
License: MIT
Last pushed: Jun 11, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/fattorib/ZeRO-transformer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
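
If you prefer Python to curl, a minimal equivalent using the requests library (assumed to be installed in your environment) could look like this:

import requests

# Same endpoint as the curl command above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/fattorib/ZeRO-transformer"
response = requests.get(url, timeout=30)
response.raise_for_status()   # raise if the API returns an error status
print(response.json())        # quality data as a Python dict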