SingleZombie/LLSA
Official implementation of Log-linear Sparse Attention (LLSA).
This project provides an efficient attention mechanism for processing very long sequences, useful for tasks such as high-resolution image generation or long-text analysis where standard attention becomes a bottleneck. It takes raw data, such as pixel information for images or tokens for text, and attends over it at reduced cost. Researchers and engineers working with large-scale generative models or other deep learning applications are the intended audience.
Use this if you are working with large Transformer models that struggle with the quadratic computational cost of attention on very long sequences or on high-resolution non-sequential data such as images.
Not ideal if your sequences are short, or if you require causal attention (where predictions depend only on past elements), which is not currently supported.
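LLSA's actual log-linear attention pattern is not reproduced here, but the general idea behind sparse attention (restricting each query to a subset of keys so the cost drops below the full quadratic) can be sketched with a simple block-diagonal mask. This is a generic NumPy illustration, not the repository's algorithm; `block_sparse_attention` and `block_size` are names chosen for this sketch only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def block_sparse_attention(q, k, v, block_size):
    """Attend only within fixed-size local blocks (block-diagonal sparsity).

    A generic sparse-attention sketch, NOT LLSA's log-linear pattern.
    q, k, v: (seq_len, dim) arrays. Cost is O(n * block_size * dim)
    instead of the O(n^2 * dim) of dense attention.
    """
    n, d = q.shape
    out = np.empty_like(v)
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        scores = q[start:end] @ k[start:end].T / np.sqrt(d)
        out[start:end] = softmax(scores) @ v[start:end]
    return out

# Example: 16 tokens, dim 8, blocks of 4 -> four small attention matrices.
rng = np.random.default_rng(0)
q = rng.normal(size=(16, 8))
k = rng.normal(size=(16, 8))
v = rng.normal(size=(16, 8))
out = block_sparse_attention(q, k, v, block_size=4)
print(out.shape)  # (16, 8)
```

With `block_size` equal to the sequence length, the function reduces to ordinary dense attention, which makes the approximation easy to sanity-check.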
Stars
62
Forks
2
Language
Python
License
—
Category
Last pushed
Feb 02, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SingleZombie/LLSA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
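The same endpoint can be called programmatically; below is a minimal Python sketch using only the standard library. The response schema is not documented on this page, so the code only assumes the endpoint returns JSON; `fetch_quality` is a name invented for this example.

```python
import json
from urllib.request import urlopen

# Endpoint as given above; no API key is required at the free tier.
API = "https://pt-edge.onrender.com/api/v1/quality/transformers/SingleZombie/LLSA"

def fetch_quality(url=API, timeout=10):
    """Fetch the quality record for a repo and parse it as JSON.

    The response fields are undocumented here, so the caller should
    inspect the returned dict rather than rely on specific keys.
    """
    with urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality()
    print(sorted(data))  # field names; schema unknown
```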
Higher-rated alternatives
microsoft/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
jadore801120/attention-is-all-you-need-pytorch
A PyTorch implementation of the Transformer model in "Attention is All You Need".
bhavnicksm/vanilla-transformer-jax
JAX/Flax implementation of 'Attention Is All You Need' by Vaswani et al....
kyegomez/SparseAttention
PyTorch implementation of the sparse attention from the paper: "Generating Long Sequences with...
AbdelStark/attnres
Rust implementation of Attention Residuals from MoonshotAI/Kimi