tobna/TaylorShift

This repository contains the code for the paper "TaylorShift: Shifting the Complexity of Self-Attention from Squared to Linear (and Back) using Taylor-Softmax"

Score: 31 / 100 (Emerging)

This project gives machine learning engineers and researchers a way to build more efficient Transformer and Vision Transformer models. It provides a specialized attention mechanism that processes sequence or image data faster, especially on long inputs: you integrate it into an existing Transformer architecture in place of standard attention to obtain a more efficient model.
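To make the idea concrete, here is a minimal sketch of the Taylor-Softmax trick named in the paper title: the exponential inside softmax is replaced by its second-order Taylor expansion, exp(x) ≈ 1 + x + x²/2, which stays positive for every real x. This is only the direct, quadratic form; the repository's actual linear-time implementation and its API are not shown here, and all names below are illustrative.

    import torch

    def taylor_softmax(x, dim=-1):
        # 2nd-order Taylor expansion of exp: exp(x) ~= 1 + x + x^2 / 2.
        # The numerator equals (x + 1)^2 / 2 + 1/2, so it is always positive
        # and the normalized output is a valid probability distribution.
        num = 1 + x + 0.5 * x ** 2
        return num / num.sum(dim=dim, keepdim=True)

    def taylor_attention(q, k, v):
        # Drop-in replacement for softmax attention (direct, quadratic form).
        # q, k, v: tensors of shape (batch, seq_len, head_dim).
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        return taylor_softmax(scores, dim=-1) @ v

Because the numerator is a degree-2 polynomial in the query-key dot product, the attention sum can be factorized and evaluated in time linear in the sequence length, which is the complexity shift the paper's title refers to.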

Use this if you are a machine learning engineer or researcher working with Transformer or Vision Transformer models and need to reduce computational complexity or improve performance, especially with large datasets or long sequences.

Not ideal if you do not work with PyTorch for deep learning, or if you need a complete, out-of-the-box solution for a specific application.

deep-learning natural-language-processing computer-vision model-optimization neural-network-architecture
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25

Stars: 13
Forks:
Language: Python
License: MIT
Last pushed: Feb 25, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tobna/TaylorShift"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
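For programmatic access, a minimal Python sketch of the same request follows. The response schema is not documented on this page, so the example simply prints the returned JSON; the endpoint URL is the one shown above, and everything else is an assumption.

    import requests

    # Fetch the quality data for tobna/TaylorShift from the endpoint above.
    # Assumes the endpoint returns JSON; the field layout is not documented
    # here, so we print the whole payload rather than picking fields.
    url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tobna/TaylorShift"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    print(resp.json())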