princeton-nlp/dyck-transformer

[ACL 2021] Self-Attention Networks Can Process Bounded Hierarchical Languages

Score: 24 / 100 (Experimental)

This project evaluates how well different language models, specifically Transformers and LSTMs, handle the bounded hierarchical structures studied in the paper. Given configuration settings for generating synthetic Dyck languages and for the language models themselves, it outputs metrics showing how well each model learns and processes these nested structures. Researchers in natural language processing and computational linguistics would use it to probe the capabilities of these architectures.

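To make the synthetic data concrete, the following is a minimal sketch of sampling a bounded-depth Dyck string with k bracket types and nesting depth at most max_depth, in the spirit of the Dyck-(k, m) languages the paper studies. The function name, token format, and sampling scheme are illustrative assumptions, not the repository's actual data-generation code.

import random

def sample_dyck(k=8, max_depth=10, target_len=100, p_open=0.5, seed=0):
    # Sample a bounded-depth Dyck string: k bracket types, nesting depth <= max_depth.
    # Hypothetical sampler for illustration only.
    rng = random.Random(seed)
    stack, tokens = [], []
    while len(tokens) < target_len or stack:
        must_open = not stack                        # empty stack: can only open
        must_close = len(stack) >= max_depth or len(tokens) >= target_len
        if must_open or (not must_close and rng.random() < p_open):
            t = rng.randrange(k)                     # pick a bracket type
            stack.append(t)
            tokens.append(f"({t}")                   # open a bracket of type t
        else:
            tokens.append(f"){stack.pop()}")         # close the most recent bracket
    return " ".join(tokens)

print(sample_dyck())
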
No commits in the last 6 months.

Use this if you are an NLP researcher or computational linguist investigating the inductive biases and processing capabilities of neural language models on structured, hierarchical data.

Not ideal if you are looking for a tool to apply to real-world natural language tasks, as this focuses on synthetic language structures for theoretical research.

natural-language-processing computational-linguistics language-model-evaluation neural-networks syntactic-parsing
Badges: No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 11 / 25


Stars: 13
Forks: 2
Language: Python
License: None
Last pushed: Jun 01, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/princeton-nlp/dyck-transformer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
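If you prefer to fetch the same report from Python, here is a minimal sketch using only the standard library. It assumes the endpoint returns JSON and does not assume any particular fields in the response; the unauthenticated request counts against the 100 requests/day limit.

import json
import urllib.request

# Fetch the quality report for this repo from the public endpoint shown above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/princeton-nlp/dyck-transformer"

with urllib.request.urlopen(URL) as resp:
    report = json.load(resp)

print(json.dumps(report, indent=2))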