alexzhang13/flashattention2-custom-mask
Triton implementation of FlashAttention2 that adds Custom Masks.
This project is aimed at machine learning engineers and researchers working with transformer models. It addresses a limitation of standard FlashAttention implementations, which typically support only built-in patterns such as causal masking, by letting you define and apply arbitrary custom attention masks. You provide your model's query, key, and value tensors along with a custom mask, and it outputs the attention result, enabling more flexible model architectures without sacrificing efficiency.
170 stars. No commits in the last 6 months.
Use this if you need to apply non-standard or complex masking patterns within your transformer attention layers and want to maintain the computational efficiency of FlashAttention.
Not ideal if you only need causal masking, which standard FlashAttention implementations already handle efficiently without an explicit mask input.
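For reference, this is what custom-mask attention computes, sketched here in plain NumPy rather than the repo's Triton kernel (a fused kernel produces the same result without ever materializing the full score matrix, which is where the efficiency comes from). The function name and shapes are illustrative, not the library's actual API.

```python
import numpy as np

def masked_attention(q, k, v, mask):
    """Reference (unfused) attention with an arbitrary boolean mask.

    q, k, v: (seq_len, head_dim) arrays; mask: (seq_len, seq_len) bool,
    True where a query position may attend to a key position.
    """
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)   # raw attention scores
    scores = np.where(mask, scores, -np.inf)       # block disallowed positions
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Example: a causal mask is shown, but any boolean pattern works,
# which is the point of a custom-mask kernel.
L, d = 4, 8
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((L, d)) for _ in range(3))
mask = np.tril(np.ones((L, L), dtype=bool))
out = masked_attention(q, k, v, mask)
print(out.shape)  # (4, 8)
```

Because row 0 of a causal mask permits only one key position, its output equals `v[0]` exactly, which is a quick sanity check for any masked-attention implementation.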
Stars
170
Forks
16
Language
Python
License
Apache-2.0
Category
Last pushed
Aug 14, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alexzhang13/flashattention2-custom-mask"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
triton-inference-server/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
gpu-mode/Triton-Puzzles
Puzzles for learning Triton
hailo-ai/hailo_model_zoo
The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment
open-mmlab/mmdeploy
OpenMMLab Model Deployment Framework
hyperai/tvm-cn
TVM Documentation in Chinese Simplified / TVM 中文文档