tech-srl/layer_norm_expressivity_role

Code for the paper "On the Expressivity Role of LayerNorm in Transformers' Attention" (Findings of ACL 2023)

Overall score: 23 / 100 (Experimental)

This project helps machine learning researchers and academics understand how Layer Normalization affects Transformer models, particularly their attention mechanisms. It provides experimental setups for tasks such as 'Majority' and 'Unselectable Keys' and produces results that demonstrate the expressivity role of Layer Normalization, targeting researchers in deep learning architecture and natural language processing.
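
As context for the paper's setting, the sketch below is illustrative and not code from this repository. It shows the geometric view of LayerNorm that the paper builds on: subtracting the mean projects each vector onto the hyperplane orthogonal to the all-ones vector, and dividing by the standard deviation rescales the projection to norm sqrt(d). All function names here are hypothetical.

import numpy as np

def layernorm(x, eps=1e-5):
    # Standard LayerNorm over the last axis (learned gain/bias omitted).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def project_and_scale(x, eps=1e-5):
    # Geometric view: project x onto the hyperplane orthogonal to the
    # all-ones vector, then rescale the result to norm sqrt(d).
    d = x.shape[-1]
    ones = np.ones(d) / np.sqrt(d)            # unit vector along [1, ..., 1]
    proj = x - (x @ ones)[..., None] * ones   # drop the component along ones
    # d * eps keeps this numerically identical to layernorm's var + eps.
    norm = np.sqrt((proj ** 2).sum(axis=-1, keepdims=True) + d * eps)
    return np.sqrt(d) * proj / norm

x = np.random.randn(4, 8)
print(np.allclose(layernorm(x), project_and_scale(x)))  # True

The repository's experiments study how these two components of LayerNorm relate to what attention can express; see the paper for the exact setups.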

No commits in the last 6 months.

Use this if you are a machine learning researcher investigating the fundamental properties and architectural choices within Transformer networks.

Not ideal if you are looking for an off-the-shelf solution for an applied NLP task or a general-purpose Transformer library.

Machine Learning Research · Transformer Models · Neural Network Architecture · Natural Language Processing · Deep Learning Theory
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 7 / 25

These four components sum to the overall score: 0 + 8 + 8 + 7 = 23 / 100.

Stars: 57
Forks: 3
Language: Python
License: none
Last pushed: Sep 27, 2024
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tech-srl/layer_norm_expressivity_role"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
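
For programmatic access beyond curl, here is a minimal Python sketch using only the standard library. It assumes the endpoint returns a JSON body and makes no assumption about the field names in the response.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/tech-srl/layer_norm_expressivity_role")

# No API key needed within the free 100 requests/day limit.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes the endpoint returns JSON

# The response schema is not documented here, so just pretty-print it.
print(json.dumps(data, indent=2))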