gazelle93/Transformer-Various-Positional-Encoding

This project aims to implement Transformer encoder blocks using various positional encoding methods.

Score: 23 / 100 (Experimental)

This project helps machine learning engineers and researchers implement Transformer encoder blocks for natural language processing tasks. It takes raw text as input and processes it through several positional encoding methods (absolute encodings, the relative encodings of Shaw et al., and the relative position biases of Raffel et al.) to produce tokenized results, attention scores, and hidden states. The output can be used to build and experiment with advanced language models.
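For orientation, here is a minimal sketch of the absolute (sinusoidal) variant from Vaswani et al., assuming a PyTorch implementation; the function name and tensor shapes are illustrative and do not reflect this repository's actual API.

import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Fixed absolute positional encodings (d_model assumed even)."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)  # (seq_len, 1)
    div_term = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32) * (-math.log(10000.0) / d_model)
    )  # (d_model // 2,)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
    return pe

# Typically added to token embeddings before the first encoder block:
# hidden = token_embeddings + sinusoidal_positional_encoding(seq_len, d_model)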

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher developing advanced natural language processing models and need to experiment with different positional encoding techniques within Transformer architectures.

Not ideal if you are looking for an out-of-the-box, production-ready NLP application or a high-level library for general text analysis.

Tags: Natural Language Processing · Deep Learning Research · Transformer Models · Machine Learning Engineering · Text Embeddings
Badges: No License · Stale 6m · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 7 / 25


Stars: 24
Forks: 2
Language: Python
License: None
Last pushed: Jun 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/gazelle93/Transformer-Various-Positional-Encoding"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
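The same request from Python, assuming the endpoint returns JSON; requests is a third-party package, and the response schema is not documented here, so inspect the payload before parsing specific fields.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "gazelle93/Transformer-Various-Positional-Encoding")

response = requests.get(URL, timeout=10)  # no API key needed up to 100 requests/day
response.raise_for_status()               # raise on HTTP errors
print(response.json())                    # dump the full payload for inspection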