Relaxed-System-Lab/Flash-Sparse-Attention

🚀🚀 Efficient implementations of Native Sparse Attention

Score: 36 / 100 (Emerging)

This project provides an optimized implementation for training and running large language models (LLMs) more efficiently. It takes standard LLM inputs and processes them with a sparse attention mechanism, which speeds up computation and reduces memory use. Developers and AI engineers working on LLM training and deployment, especially with models that use sparse attention, will find it useful.
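To make the idea concrete: sparse attention saves compute by letting each query attend to only a subset of keys rather than the full sequence. The sketch below is a plain PyTorch illustration of one common variant, top-k block-sparse attention; it is not this repository's API, and the function and parameter names (topk_block_sparse_attention, block_size, topk) are hypothetical. The project's own kernels are GPU-optimized and far more efficient than this dense-masked demo.

import torch
import torch.nn.functional as F

def topk_block_sparse_attention(q, k, v, block_size=64, topk=4):
    """Illustrative top-k block-sparse attention: each query block attends
    only to its highest-scoring key blocks instead of the full sequence.
    Shapes: q, k, v are (batch, heads, seq_len, head_dim); seq_len must be
    divisible by block_size. No causal mask, for brevity."""
    b, h, n, d = q.shape
    nb = n // block_size
    # Block-mean summaries of queries and keys, used to rank key blocks.
    q_blk = q.view(b, h, nb, block_size, d).mean(dim=3)
    k_blk = k.view(b, h, nb, block_size, d).mean(dim=3)
    # Coarse block-level scores: (b, h, nb_query_blocks, nb_key_blocks).
    blk_scores = q_blk @ k_blk.transpose(-1, -2)
    # Keep only the top-k key blocks per query block.
    top = blk_scores.topk(min(topk, nb), dim=-1).indices
    # Build a dense mask from the selected blocks (fine for a demo;
    # real sparse kernels skip the masked blocks entirely).
    mask = torch.zeros(b, h, nb, nb, dtype=torch.bool, device=q.device)
    mask.scatter_(-1, top, True)
    mask = mask.repeat_interleave(block_size, dim=2)
    mask = mask.repeat_interleave(block_size, dim=3)
    attn = (q @ k.transpose(-1, -2)) / d**0.5
    attn = attn.masked_fill(~mask, float("-inf"))
    return F.softmax(attn, dim=-1) @ v

# Demo: 256-token sequence, 8 heads, 64-dim heads.
q = k = v = torch.randn(1, 8, 256, 64)
out = topk_block_sparse_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 256, 64])

With topk=4 and nb=4 key blocks here the mask is dense; the savings appear at longer sequences, where each query block attends to a fixed number of key blocks regardless of total length.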

983 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to speed up the training and inference of large language models, particularly those using sparse attention mechanisms on NVIDIA GPUs.

Not ideal if you are working with non-LLM models, do not require sparse attention, or are not using NVIDIA GPUs.

Large-Language-Models Deep-Learning-Optimization AI-Infrastructure Model-Training GPU-Computing
Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 9 / 25


Stars: 983
Forks: 14
Language: Python
License: Apache-2.0
Last pushed: Sep 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Relaxed-System-Lab/Flash-Sparse-Attention"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
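The same endpoint can be queried programmatically. A minimal Python sketch, assuming the endpoint returns JSON (the response schema is not documented on this page):

import requests

# The quality endpoint shown above (100 requests/day without a key).
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Relaxed-System-Lab/Flash-Sparse-Attention")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
# Assumption: the API returns a JSON document; it is printed as-is
# since the field names are not specified here.
print(resp.json())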