MaxLSB/flash-attn2

FlashAttention for sliding window attention in Triton (fwd + bwd pass)

Quality score: 30 / 100 (Emerging)

This project helps machine learning engineers accelerate the attention mechanism in large language models. It implements FlashAttention-2 as Triton kernels, covering both the forward and backward passes, so your model's attention computations run much faster on NVIDIA GPUs. The result is significantly quicker training and inference for models that use sliding window, global, or causal attention.
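
As a reference for what such kernels compute, below is a minimal sliding window attention implementation in plain PyTorch. This is an illustrative sketch of the standard formulation, not code from this repository; the function name and the window_size parameter are chosen for the example.

import torch

def sliding_window_attention(q, k, v, window_size):
    # q, k, v: (batch, heads, seq_len, head_dim)
    seq_len, head_dim = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
    # Causal sliding window: position i attends to positions j with
    # i - window_size < j <= i.
    idx = torch.arange(seq_len, device=q.device)
    rel = idx[:, None] - idx[None, :]  # i - j
    allowed = (rel >= 0) & (rel < window_size)
    scores = scores.masked_fill(~allowed, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Example: 1 batch, 8 heads, 128 tokens, 64-dim heads, window of 32
q = k = v = torch.randn(1, 8, 128, 64)
out = sliding_window_attention(q, k, v, window_size=32)

A fused Triton kernel computes the same result without materializing the full seq_len x seq_len score matrix, which is where the speedup comes from.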

No commits in the last 6 months.

Use this if you are developing or training large language models and need to speed up the attention computation on NVIDIA GPUs, especially for models employing sliding window attention.

Not ideal if you are not working with large language models, do not have access to NVIDIA GPUs, or need features this kernel does not provide, such as dropout or attention variants beyond sliding window, global, and causal.

large-language-models deep-learning-optimization gpu-acceleration natural-language-processing neural-network-training
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 7 / 25

Stars: 11
Forks: 1
Language: Python
License: MIT
Last pushed: Jun 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MaxLSB/flash-attn2"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
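
For programmatic use, here is a minimal Python sketch of the same request; it mirrors the curl command above, and the shape of the JSON payload is an assumption since the response fields are not documented here.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MaxLSB/flash-attn2"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumption: the endpoint returns JSON
print(data)         # inspect the payload to discover the actual field names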