zxuu/Self-Attention

A complete implementation of the Transformer. Builds the Encoder, Decoder, and Self-Attention in detail, demonstrated with a concrete example covering the full input, training, and prediction pipeline. Useful for learning and understanding self-attention and the Transformer.

Overall score: 36 / 100 (Emerging)

This project offers a hands-on implementation of the Transformer neural network architecture, specifically focusing on its self-attention mechanism. It takes paired sequences of text, such as a sentence in one language and its translation, and processes them to demonstrate how Transformers can learn to generate one sequence from another. This is for machine learning engineers or researchers who want to understand the inner workings of Transformers and self-attention.
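As a primer for the mechanism the repo implements, here is a minimal NumPy sketch of single-head scaled dot-product self-attention. It is an illustration only, not the repository's own code; the function name and dimensions are made up for the example.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) learned projections
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # token-to-token similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax over positions
    return weights @ V                               # each token: weighted sum of all values

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))                      # 5 tokens, model width 8
Wq, Wk, Wv = (rng.standard_normal((8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 4)

In the full Transformer, this block runs with multiple heads and is stacked inside both the Encoder and Decoder, with the Decoder additionally attending to the Encoder's output.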

127 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking for a complete, working example to learn and understand the Transformer architecture and its self-attention components in detail.

Not ideal if you are a practitioner looking for a pre-trained model or a high-level API to directly apply Transformers to your own data without needing to understand the underlying implementation.

Topics: natural-language-processing · machine-translation · deep-learning · neural-networks · sequence-modeling
Flags: No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 18 / 25

How are scores calculated? The four dimension scores appear to sum to the overall score: 0 + 10 + 8 + 18 = 36 / 100.

Stars: 127
Forks: 22
Language: Python
License: none
Last pushed: Apr 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/zxuu/Self-Attention"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
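The same request from Python, as a sketch. The response schema is not documented on this page, so the code prints the raw JSON rather than assuming field names.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/zxuu/Self-Attention"
resp = requests.get(url, timeout=10)   # anonymous access: 100 requests/day
resp.raise_for_status()
print(resp.json())                     # inspect the payload for the exact field names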