M-e-r-c-u-r-y/pytorch-transformers
Collection of different types of transformers for learning purposes
This project provides practical tutorials on implementing transformer components and models, such as Multi-Head Attention, using PyTorch and TorchText. It walks through building these models end to end, starting from raw text data and finishing with trained models that can process and understand language. It is aimed at machine learning engineers and researchers who want to learn or refine their understanding of transformer architectures.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher who wants to learn how to build and understand transformer models from scratch using PyTorch.
Not ideal if you are looking for a pre-built, production-ready transformer library or a tool that doesn't require coding to implement natural language processing tasks.
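To give a feel for the material, here is a minimal sketch of the kind of multi-head attention block the tutorials implement, written with PyTorch's built-in nn.MultiheadAttention. It is illustrative only; the dimensions and hyperparameters are assumptions, not taken from the repository.

import torch
import torch.nn as nn

# Illustrative sketch: a self-attention step using PyTorch's built-in module.
# The repo builds attention from scratch; this only shows the expected shapes.
embed_dim, num_heads, seq_len, batch_size = 64, 8, 10, 2  # assumed values

attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
x = torch.randn(batch_size, seq_len, embed_dim)  # stand-in for embedded tokens
out, weights = attention(x, x, x)  # self-attention: query = key = value
print(out.shape)      # torch.Size([2, 10, 64])
print(weights.shape)  # torch.Size([2, 10, 10]), averaged over heads by default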
Stars: 12
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Jan 30, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/M-e-r-c-u-r-y/pytorch-transformers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
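If you prefer Python over curl, a minimal sketch of the same request is below. It assumes the endpoint returns JSON; the response field names and any API-key mechanism are not documented here, so treat them as assumptions.

import requests

# Fetch repository quality data from the endpoint shown above.
# No key is needed for up to 100 requests/day; higher limits require a free key.
url = ("https://pt-edge.onrender.com/api/v1/quality/transformers/"
       "M-e-r-c-u-r-y/pytorch-transformers")
response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()  # assumed JSON payload; inspect it to see available fields
print(data)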
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action