FareedKhan-dev/train-tiny-llm
Train a 29M parameter GPT from Scratch
This project helps machine learning engineers and researchers build small, custom GPT-style language models from scratch. You provide a text corpus, and the project produces a trained model capable of understanding and generating human-like text, along with a web interface for interacting with it. It is designed for those who want to understand and control every stage of the LLM development process, from tokenization to fine-tuning.
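The "29M parameter" figure in the title can be sanity-checked with a back-of-the-envelope count. The sketch below uses a standard GPT layout (token and position embeddings plus transformer blocks with 4x MLP expansion); the specific hyperparameters shown are hypothetical values chosen to land near 29M, not the repo's actual configuration.

```python
def gpt_param_count(vocab: int, d_model: int, n_layers: int, seq_len: int) -> int:
    """Approximate parameter count for a decoder-only GPT (ignores biases and layer norms)."""
    # Token embeddings plus learned position embeddings
    embeddings = vocab * d_model + seq_len * d_model
    # Per block: Q, K, V, O projections (4*d^2) + two MLP matrices with 4x hidden (8*d^2)
    per_block = 12 * d_model * d_model
    return embeddings + n_layers * per_block

# Hypothetical configuration; the repo's real hyperparameters may differ.
total = gpt_param_count(vocab=8000, d_model=512, n_layers=8, seq_len=512)
print(f"{total / 1e6:.1f}M parameters")  # roughly 29.5M with these dimensions
```

Biases, layer-norm gains, and weight tying shift the total by a percent or two, so treat this as an order-of-magnitude check rather than an exact accounting.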
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher who wants to train a small, instruction-following language model tailored to specific data, rather than using a pre-existing large model.
Not ideal if you need a production-ready, highly capable LLM immediately without diving deep into its internal workings, or if you lack the computational resources (GPU, RAM) for training.
Stars
34
Forks
7
Language
Python
License
MIT
Category
Last pushed
Mar 04, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/FareedKhan-dev/train-tiny-llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
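The same endpoint can be queried programmatically. The sketch below builds the URL from the owner/repo pair shown in the curl example; the response's JSON field names are not documented on this page, so the fetch helper simply decodes whatever JSON comes back.

```python
import json
from urllib.request import urlopen

# Path pattern taken from the curl example above:
# /api/v1/quality/llm-tools/<owner>/<repo>
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("FareedKhan-dev", "train-tiny-llm"))
```

Unauthenticated callers get 100 requests/day, so cache responses locally if you poll many repositories.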
Higher-rated alternatives
Lightning-AI/litgpt
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
liangyuwang/Tiny-DeepSpeed
Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library
catherinesyeh/attention-viz
Visualizing query-key interactions in language + vision transformers (VIS 2023)
microsoft/Text2Grad
🚀 Text2Grad: Converting natural language feedback into gradient signals for precise model...
huangjia2019/llm-gpt
From classic NLP to modern LLMs: building language models step by step. Companion to the 异步图书 title "GPT Illustrated: How Large Models Are Built" -...