camail-official/LinearAttentionPruning

This is the official repository for the preprint "The Key to State Reduction in Linear Attention: A Rank-based Perspective".

Score: 18 / 100 (Experimental)

This tool helps machine learning engineers and researchers make large language models built on the DeltaNet and Gated DeltaNet linear attention architectures more efficient. It takes an existing linear attention model, reduces the dimensionality of its query/key (Q/K) projections, and outputs a smaller, faster model. The goal is to retain comparable performance at a significantly lower computational cost.
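The core idea of rank-based state reduction can be sketched, in a deliberately simplified form, as a low-rank truncation of the Q/K projection weights. This is an illustration only, not the repository's actual algorithm (the preprint describes the real rank criterion); the function name and shapes below are hypothetical.

```python
import numpy as np

def reduce_qk_rank(W_q, W_k, rank):
    """Illustrative low-rank approximation of paired Q/K projections.

    Truncates the SVD of each (d_model, d_head) projection matrix to
    `rank` components. The matrices keep their original shape, but
    their effective rank drops to at most `rank`, which is the kind
    of state reduction the repository targets.
    """
    def truncate(W):
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        # Keep only the `rank` largest singular directions.
        return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

    return truncate(W_q), truncate(W_k)

rng = np.random.default_rng(0)
W_q = rng.standard_normal((64, 32))
W_k = rng.standard_normal((64, 32))
W_q_r, W_k_r = reduce_qk_rank(W_q, W_k, rank=8)
print(W_q_r.shape)  # same shape as W_q, but rank at most 8
```

In a real pruning pipeline the truncated factors would be folded back into smaller projection layers rather than kept at full shape, which is where the memory and speed savings come from.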

Use this if you need to deploy large language models with linear attention layers more efficiently, reducing their memory footprint and increasing inference speed while minimizing performance degradation.

Not ideal if you are working with transformer architectures that do not use linear attention, or if your primary goal is to improve model accuracy rather than efficiency.

Tags: large-language-models, model-optimization, deep-learning-deployment, AI-efficiency
No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 3 / 25
Community: 0 / 25


Stars: 9
Forks:
Language: Python
License: none
Last pushed: Feb 10, 2026
Commits (30d): 0

Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/camail-official/LinearAttentionPruning"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.