yifanzhang-pro/HLA

Official Project Page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258)

Overall score: 36 / 100 (Emerging)

This project helps machine learning engineers and researchers scale autoregressive language models to very long sequences. It processes input sequences more efficiently than traditional attention, so models can handle much larger contexts without prohibitive computational cost. It is aimed at professionals building and training large language models.

Use this if you are developing large language models and struggle with the quadratic computational cost of traditional attention mechanisms when dealing with long input sequences.

Not ideal if you are looking for an off-the-shelf solution for natural language processing tasks rather than a component for building custom models.
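To make the quadratic-cost point concrete: standard attention materializes a T×T score matrix, while linear attention exploits associativity of matrix products to keep only a d×d state. The sketch below illustrates that first-order trick (the general principle behind the higher-order variant this project studies); the feature map `phi` and all shapes are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

# Illustrative sketch of first-order linear attention (not HLA itself).
# Quadratic route: (phi(Q) phi(K)^T) V builds a T x T matrix, O(T^2 d).
# Linear route:     phi(Q) (phi(K)^T V) keeps a d x d state,  O(T d^2).
rng = np.random.default_rng(0)
T, d = 128, 16

# Assumed positive feature map, chosen only for demonstration.
phi = lambda x: np.maximum(x, 0.0) + 1e-3

Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

quad = (phi(Q) @ phi(K).T) @ V   # materializes T x T scores
lin = phi(Q) @ (phi(K).T @ V)    # same result via associativity

assert np.allclose(quad, lin)
```

For long contexts (T ≫ d) the reordered product avoids ever forming the T×T matrix, which is the efficiency gain the description above refers to.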

large-language-models natural-language-processing deep-learning-optimization sequence-modeling machine-learning-research
No package · No dependents

Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 13 / 25
Community: 9 / 25


Stars: 45
Forks: 4
Language: HTML
License: CC-BY-4.0
Last pushed: Jan 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/yifanzhang-pro/HLA"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.