liangyuwang/train-large-model-from-scratch
A minimal, hackable pre-training stack for GPT-style language models
This project offers a foundational toolkit for researchers and machine learning engineers to train large language models (LLMs) from the ground up. It takes raw text data as input and produces a trained GPT-style language model, ready for fine-tuning or deployment. Users can customize the model architecture and optimize training across multiple GPUs or machines.
Use this if you need a flexible and performant starting point for building your own large-scale generative AI models with full control over the pre-training process and distributed training capabilities.
Not ideal if you're looking for an out-of-the-box, pre-trained model for immediate use or if you only need to fine-tune an existing smaller model.
Stars
7
Forks
—
Language
Python
License
Apache-2.0
Category
—
Last pushed
Feb 21, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/liangyuwang/train-large-model-from-scratch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
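The same endpoint can be called from Python instead of curl; a minimal sketch using only the standard library (the response schema is not documented on this page, so the JSON is returned as-is):

```python
import json
import urllib.request

# Endpoint shown above; no API key is needed for up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "liangyuwang/train-large-model-from-scratch")

def fetch_quality(url: str = URL, timeout: float = 10.0):
    """Fetch the quality data for this repo; return None on network errors."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)  # parse the JSON body generically
    except OSError:
        return None  # DNS failure, timeout, HTTP error, etc.
```

With a free key (1,000 requests/day), you would presumably pass it as a header or query parameter; the page does not specify which, so that part is left out of the sketch.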
Higher-rated alternatives
Lightning-AI/litgpt
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
liangyuwang/Tiny-DeepSpeed
Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library
catherinesyeh/attention-viz
Visualizing query-key interactions in language + vision transformers (VIS 2023)
microsoft/Text2Grad
🚀 Text2Grad: Converting natural language feedback into gradient signals for precise model...
huangjia2019/llm-gpt
From classic NLP to modern LLMs: building language models step by step. Companion to the Epubit (异步图书) title 《GPT图解：大模型是怎样构建的》 (Illustrated GPT: How Large Models Are Built) -...