alexzhang13/flashattention2-custom-mask

Triton implementation of FlashAttention-2 that adds support for custom attention masks.

Quality score: 39 / 100 (Emerging)

This project helps machine learning engineers and researchers working with transformer models. It addresses a limitation of standard FlashAttention implementations, which support only a fixed set of masking patterns, by allowing you to define and use arbitrary custom attention masks. You provide your model's query, key, and value tensors along with a custom mask, and it outputs the attention result, enabling more flexible model architectures without sacrificing efficiency.
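To make the computation concrete, here is a minimal, unfused NumPy reference of what a masked-attention kernel like this one computes: softmax(QKᵀ/√d + mask) · V, where disallowed positions are set to -inf before the softmax. The function name and shapes are illustrative, not this library's API; the Triton kernel fuses these steps so the full score matrix is never materialized.

```python
import numpy as np

def masked_attention(q, k, v, mask):
    """Reference (unfused) attention with an arbitrary boolean mask.

    q, k, v: (seq_len, head_dim) arrays.
    mask: (seq_len, seq_len) boolean, True where attention is allowed;
    each row must allow at least one position. Illustrative only --
    the actual library exposes a fused Triton kernel instead.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (L, L) attention logits
    scores = np.where(mask, scores, -np.inf)       # block disallowed positions
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                             # (L, head_dim) output
```

With a causal (lower-triangular) mask, the first token can attend only to itself, so its output row equals the first value row.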

170 stars. No commits in the last 6 months.

Use this if you need to apply non-standard or complex masking patterns within your transformer attention layers and want to maintain the computational efficiency of FlashAttention.
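As an example of a pattern that motivates custom masks, here is a sketch of a prefix-LM mask: bidirectional attention over a prefix, causal attention afterwards. A causal-only kernel cannot express this, but it is just a boolean matrix you could pass as the custom mask. The function name and True-means-allowed convention are assumptions for illustration, not this library's API.

```python
import numpy as np

def prefix_lm_mask(seq_len, prefix_len):
    """Prefix-LM attention mask (illustrative, not this library's API).

    Tokens in the first `prefix_len` positions attend bidirectionally
    within the prefix; later tokens attend causally, but every token
    may attend to the full prefix. True = attention allowed.
    """
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # causal base
    mask[:, :prefix_len] = True  # all tokens may see the whole prefix
    return mask
```

Other patterns in the same spirit include block-diagonal masks for packed sequences and sliding-window masks; all reduce to supplying a custom boolean matrix.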

Not ideal if you only need causal masking, which standard FlashAttention implementations handle efficiently without an explicit mask input.

transformer-models attention-mechanisms deep-learning-research large-language-models neural-network-architecture
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 170
Forks: 16
Language: Python
License: Apache-2.0
Last pushed: Aug 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alexzhang13/flashattention2-custom-mask"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.