xmarva/transformer-architectures

Teaching transformer-based architectures

23 / 100 (Experimental)

This collection of notebooks helps machine learning engineers and researchers understand how Transformer neural networks work by guiding them through building one from scratch. You'll work with raw text data, learn how to prepare it for models, implement the core components of Transformers such as the attention mechanism, and then train the models. The result is a deep, practical understanding of advanced natural language processing architectures.
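
For context, the attention mechanism mentioned above reduces to a few lines of linear algebra. Below is a minimal NumPy sketch of scaled dot-product self-attention; the function name, shapes, and toy data are illustrative and not taken from the notebooks themselves:

import numpy as np

def scaled_dot_product_attention(q, k, v):
    # q, k: (seq_len, d_k); v: (seq_len, d_v)
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                      # query-key similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the keys
    return weights @ v                                   # weighted sum of values

# Toy usage: 4 tokens with 8-dimensional embeddings, attending to themselves
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)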

No commits in the last 6 months.

Use this if you are a machine learning practitioner looking to build a foundational understanding of Transformer architectures, from basic components to training complete models like BERT or GPT.

Not ideal if you are looking for a plug-and-play solution or an application-focused tool to solve a specific NLP problem without delving into the underlying model architecture.

natural-language-processing machine-learning-engineering deep-learning-education neural-network-design
No License · Stale 6m · No Package · No Dependents
Maintenance 2 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 9 / 25


Stars: 7
Forks: 1
Language: Jupyter Notebook
License: None
Last pushed: May 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/xmarva/transformer-architectures"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
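
The same endpoint can also be queried from Python. A minimal sketch using the requests library, assuming the endpoint returns a JSON payload with the scores shown above:

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/xmarva/transformer-architectures"
resp = requests.get(url, timeout=10)
resp.raise_for_status()          # surface HTTP errors (e.g. rate limiting)
data = resp.json()               # assumed to be JSON; field names are not documented here
print(data)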