fla-org/flash-linear-attention

🚀 Efficient implementations of state-of-the-art linear attention models

Score: 76 / 100 (Verified)

This project offers highly optimized building blocks for next-generation AI models that process very long input sequences efficiently. It provides ready-to-use implementations of advanced linear-attention and state-space model architectures. AI researchers and machine learning engineers can use these components to build more powerful and scalable models for tasks such as natural language understanding or time-series prediction.
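The core idea behind these building blocks is that, with a suitable feature map, attention can be computed with a fixed-size running state instead of an N-by-N score matrix, so the cost grows linearly rather than quadratically with sequence length. Below is a minimal PyTorch sketch of that recurrence (plain linear attention with an elu+1 feature map); it is an illustration only, not this library's API — the project ships fused, chunked kernels and ready-made layers, so consult the repo README for the actual interfaces.

import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, num_heads, head_dim)
    phi_q = F.elu(q) + 1  # positive feature map phi(x) = elu(x) + 1
    phi_k = F.elu(k) + 1
    b, n, h, d = q.shape
    S = q.new_zeros(b, h, d, d)   # running sum of phi(k_i) v_i^T
    z = q.new_zeros(b, h, d)      # running sum of phi(k_i) for normalization
    outputs = []
    for t in range(n):
        qt, kt, vt = phi_q[:, t], phi_k[:, t], v[:, t]          # (b, h, d) each
        S = S + torch.einsum('bhd,bhe->bhde', kt, vt)           # accumulate key-value outer products
        z = z + kt
        num = torch.einsum('bhd,bhde->bhe', qt, S)              # phi(q_t) applied to the state
        den = torch.einsum('bhd,bhd->bh', qt, z).unsqueeze(-1) + eps
        outputs.append(num / den)
    return torch.stack(outputs, dim=1)                          # (b, n, h, d)

The per-step work is constant in sequence length, which is what makes these models attractive for long contexts; the library's Triton-based kernels implement the same idea far more efficiently than this explicit loop.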

4,549 stars. Used by 1 other package. Actively maintained with 29 commits in the last 30 days. Available on PyPI.

Use this if you are a machine learning researcher or engineer building large language models or other sequence models and need highly optimized components to process long data sequences more efficiently.

Not ideal if you are looking for a complete, end-user application or a no-code solution for general-purpose AI tasks.

Tags: AI-model-development, large-language-models, sequence-modeling, deep-learning-optimization, AI-research
Maintenance 20 / 25
Adoption 11 / 25
Maturity 25 / 25
Community 20 / 25


Stars: 4,549
Forks: 431
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 29
Dependencies: 2
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/fla-org/flash-linear-attention"

Open to everyone: 100 requests/day with no key required; a free API key raises the limit to 1,000 requests/day.
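The same endpoint can be queried from Python with requests. A minimal sketch follows; the X-API-Key header name used for authenticated requests is an assumption, so check the API documentation for the actual parameter.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/fla-org/flash-linear-attention"

# Anonymous access: up to 100 requests/day.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())

# With a free key (up to 1,000 requests/day); the header name below is an assumption.
# resp = requests.get(URL, headers={"X-API-Key": "YOUR_KEY"}, timeout=10)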