liangyuwang/train-large-model-from-scratch

A minimal, hackable pre-training stack for GPT-style language models

Quality score: 29 / 100 (Experimental)

This project offers a foundational toolkit for researchers and machine learning engineers to train large language models (LLMs) from the ground up. It takes raw text data as input and produces a trained GPT-style language model, ready for fine-tuning or deployment. Users can customize the model architecture and optimize training across multiple GPUs or machines.

Use this if you need a flexible and performant starting point for building your own large-scale generative AI models with full control over the pre-training process and distributed training capabilities.

Not ideal if you're looking for an out-of-the-box pre-trained model you can use immediately, or if you only need to fine-tune an existing smaller model.
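
For orientation, the core loop such a pre-training stack implements looks roughly like the sketch below. This is a minimal single-device example in plain PyTorch; every name in it (TinyGPT, the hyperparameters, the random-token batch) is illustrative and is not the repo's actual API. The repo layers architecture customization and multi-GPU / multi-node training on top of a loop of this shape.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGPT(nn.Module):
    """Illustrative decoder-only LM; names and sizes are assumptions, not the repo's API."""
    def __init__(self, vocab_size=50257, d_model=256, n_heads=4, n_layers=4, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, idx):
        T = idx.size(1)
        pos = torch.arange(T, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)
        # Causal mask so position t only attends to positions <= t.
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(idx.device)
        x = self.blocks(x, mask=causal)
        return self.lm_head(x)

model = TinyGPT()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

tokens = torch.randint(0, 50257, (2, 129))  # stand-in for a tokenized text batch
logits = model(tokens[:, :-1])              # predict token t+1 from tokens <= t
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()

At real scale, the same step would run under torch.distributed (e.g., wrapped in DistributedDataParallel and launched with torchrun) over a streaming tokenized dataset rather than random tokens.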

Tags: large-language-models, generative-AI, model-pretraining, distributed-ML, transformer-architectures
No package published; no dependents.

Score breakdown:
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 15 / 25
Community: 0 / 25

Stars: 7
Forks:
Language: Python
License: Apache-2.0
Last pushed: Feb 21, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/liangyuwang/train-large-model-from-scratch"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
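
The same endpoint can also be queried from Python using only the standard library. The response schema is not documented on this page, so this sketch simply prints whatever JSON comes back:

import json
import urllib.request

# Same quality endpoint as the curl example above; no key is needed
# at the 100 requests/day tier.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/liangyuwang/train-large-model-from-scratch")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))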