SauravP97/toy-transformer
A decoder-only Transformer implementing masked attention
This project helps you train a small language model on your own text data and then generate new text from it, right on your local machine. You feed in a block of text, and it produces a trained model that can generate coherent continuations based on the patterns it learned from your input. It's designed for anyone interested in experimenting with text generation or understanding how basic language models work, without needing specialized hardware.
Use this if you want to train a small text-generating AI on your own documents and create new content or explore basic AI language models without requiring a powerful GPU.
Not ideal if you need a high-performance, production-ready language model, or if you plan to work with very large datasets or complex, nuanced text generation tasks.
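The core mechanism the repository's name refers to, masked (causal) attention, lets each token attend only to earlier positions so the model can be trained to predict the next token. Below is a minimal single-head sketch of that idea in NumPy; the function name, shapes, and weight matrices are illustrative assumptions, not code from this repository.

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head attention where position i may only attend to j <= i.

    x:            (T, d) token embeddings
    w_q/w_k/w_v:  (d, d) projection matrices (illustrative, not from the repo)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = (q @ k.T) / np.sqrt(d_k)              # (T, T) similarity scores
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf                         # hide future positions
    # softmax over each row; -inf entries become exactly zero weight
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.normal(size=(T, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = causal_self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

Because the first token can attend only to itself, its output row is exactly its own value projection; that property is an easy sanity check when experimenting with masking.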
Stars: 11
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Jan 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SauravP97/toy-transformer"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
huggingface/transformers
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in...
kyegomez/LongNet
Implementation of plug in and play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
pbloem/former
Simple transformer implementation from scratch in pytorch. (archival, latest version on codeberg)
NVIDIA/FasterTransformer
Transformer related optimization, including BERT, GPT
kyegomez/SimplifiedTransformers
SimplifiedTransformer simplifies transformer block without affecting training. Skip connections,...