ChaitanyaK77/Building-a-Small-Language-Model-SLM-

This repository provides a Jupyter Notebook for building a small language model (SLM) from scratch on the TinyStories dataset. It covers data preprocessing, BPE tokenization, binary storage of token ids, GPU memory management, and training a Transformer in PyTorch, then generates sample stories to test the model. Ideal for learning NLP and PyTorch.
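The BPE tokenization step mentioned above can be sketched in a few lines of pure Python: repeatedly count adjacent token pairs and merge the most frequent pair into a new token id. This is a minimal illustration of the algorithm, not the notebook's actual code; the function names and the byte-level starting vocabulary are assumptions.

```python
from collections import Counter

def get_pair_counts(ids):
    """Count adjacent token-id pairs across a sequence."""
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    """Replace every non-overlapping occurrence of `pair` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    """Learn `num_merges` merge rules, starting from raw UTF-8 bytes."""
    ids = list(text.encode("utf-8"))
    merges = {}
    for step in range(num_merges):
        counts = get_pair_counts(ids)
        if not counts:
            break
        pair = max(counts, key=counts.get)  # most frequent adjacent pair
        new_id = 256 + step                 # new ids start after the 256 byte values
        merges[pair] = new_id
        ids = merge(ids, pair, new_id)
    return ids, merges

ids, merges = train_bpe("aaabdaaabac", 3)
```

A real training run would apply the learned merge rules to the whole corpus before writing the ids to disk.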

Score: 41 / 100 (Emerging)

This project provides a step-by-step guide, in a Jupyter Notebook, for building a small language model. You'll learn how to take raw text data, process it into a format a machine can understand, train a neural network, and then generate new short stories similar to the training data. It is designed for AI/ML practitioners, researchers, and students who want to understand how language models work from the ground up.
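The "process it into a format a machine can understand" step typically means storing token ids in a compact binary file and memory-mapping it for batch sampling during training. Here is a minimal sketch of that pattern, assuming NumPy is available; the filename `train.bin`, the `get_batch` helper, and the toy token ids are illustrative, not taken from the notebook.

```python
import numpy as np

# Hypothetical token ids from a tokenizer; a real run would tokenize the
# TinyStories text first. uint16 covers vocab sizes up to 65,535 and
# halves disk usage versus int32.
tokens = np.arange(10, 90, 10, dtype=np.uint16)
tokens.tofile("train.bin")

# Memory-map the file so training can sample batches without loading
# the whole dataset into RAM.
data = np.memmap("train.bin", dtype=np.uint16, mode="r")

def get_batch(data, block_size, batch_size, rng):
    """Sample random contiguous windows of tokens; the targets are the
    inputs shifted one position (next-token prediction)."""
    starts = rng.integers(0, len(data) - block_size, size=batch_size)
    x = np.stack([data[s : s + block_size] for s in starts])
    y = np.stack([data[s + 1 : s + 1 + block_size] for s in starts])
    return x, y

x, y = get_batch(data, block_size=4, batch_size=2, rng=np.random.default_rng(0))
```

In a PyTorch training loop the sampled arrays would be converted to tensors and moved to the GPU, which is where the notebook's GPU memory management comes in.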

No commits in the last 6 months.

Use this if you are an AI/ML enthusiast or student who wants to learn the fundamental components and training process of a language model using standard hardware.

Not ideal if you need a production-ready large language model, or advanced NLP tooling you can use without building the underlying model yourself.

natural-language-processing machine-learning-education neural-networks text-generation pytorch-development
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 15 / 25
Community 17 / 25

How are scores calculated?

Stars: 32
Forks: 11
Language: Jupyter Notebook
License: MIT
Last pushed: Jun 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ChaitanyaK77/Building-a-Small-Language-Model-SLM-"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.