m-a-n-i-f-e-s-t/power-attention
Attention Kernels for Symmetric Power Transformers
This project provides specialized attention kernels for symmetric power transformers, an attention variant aimed at improving the efficiency of transformer models. It is primarily for machine learning engineers and researchers who are developing or optimizing transformer-based AI systems.
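For background, the following is a sketch of the idea behind symmetric power attention, paraphrased from Manifest AI's public writeup rather than taken from this listing: the softmax score is replaced by an even power p of the query-key dot product, which admits a linear-cost recurrent form.

```latex
% Power attention score: an even power p of the dot product
\mathrm{score}(q_i, k_j) = (q_i^\top k_j)^p
% Since (q^\top k)^p = \langle \phi_p(q), \phi_p(k) \rangle, where \phi_p
% maps a vector to its p-th symmetric tensor power, the causal attention
% output can be accumulated as a running state, giving cost linear in
% sequence length:
\mathrm{out}_i
  = \frac{\sum_{j \le i} (q_i^\top k_j)^p \, v_j}
         {\sum_{j \le i} (q_i^\top k_j)^p}
  = \frac{\phi_p(q_i)^\top \sum_{j \le i} \phi_p(k_j)\, v_j^\top}
         {\phi_p(q_i)^\top \sum_{j \le i} \phi_p(k_j)}
```

The sums over j ≤ i can be maintained incrementally, which is what makes kernel-level implementations of this attention variant attractive.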
129 stars. No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher specifically working on transformer architectures and need advanced attention kernels.
Not ideal if you are not deeply involved in deep learning model development or are looking for a high-level API for general AI applications.
Stars: 129
Forks: 8
Language: —
License: —
Category: —
Last pushed: Sep 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/m-a-n-i-f-e-s-t/power-attention"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
tatp22/linformer-pytorch
My take on a practical implementation of Linformer for Pytorch.
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models