m-a-n-i-f-e-s-t/power-attention

Attention Kernels for Symmetric Power Transformers

Score: 30 / 100 (Emerging)

This project provides specialized attention kernels implementing the symmetric power attention mechanism used in symmetric power transformers, aimed at improving the performance of transformer models in deep learning. It is primarily for machine learning engineers and researchers who are developing or optimizing transformer-based AI systems.

129 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher specifically working on transformer architectures and need advanced attention kernels.

Not ideal if you are not deeply involved in deep learning model development or are looking for a high-level API for general AI applications.

deep-learning transformer-models attention-mechanisms neural-networks machine-learning-research
No License · Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 10 / 25


Stars: 129
Forks: 8
Language: —
License: —
Last pushed: Sep 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/m-a-n-i-f-e-s-t/power-attention"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
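The same endpoint can be called programmatically. A minimal Python sketch, assuming the URL pattern `/api/v1/quality/{category}/{owner}/{repo}` shown in the curl command above (the JSON response schema is not documented here, so this only builds and fetches the URL):

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-project quality endpoint; each path segment is
    # URL-escaped so unusual repo names stay valid in the path.
    return f"{BASE}/{quote(category, safe='')}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("ml-frameworks", "m-a-n-i-f-e-s-t", "power-attention")
# Fetch with any HTTP client, e.g.:
#   import urllib.request
#   data = urllib.request.urlopen(url).read()
```

Within the anonymous tier this needs no authentication; with a key, pass it however the API documents (likely a header or query parameter, not shown here).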