XunhaoLai/native-sparse-attention-triton

Efficient Triton implementation of Native Sparse Attention.

Quality score: 41 / 100 (Emerging)

This project provides an efficient Triton implementation of Native Sparse Attention for training and serving large language models (LLMs). Given an LLM's input sequences, it computes attention outputs with a sparse mechanism that is substantially faster than standard dense attention. It is aimed at researchers and engineers developing or deploying LLMs who need to process long sequences of text or other data quickly.
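For intuition, here is a minimal sketch of the general idea behind sparse attention in plain PyTorch: each block of queries attends only to a small, selected subset of key/value blocks instead of the whole sequence. This is a conceptual illustration only; the function name, parameters, and block-selection heuristic are hypothetical and do not correspond to this repository's Triton kernels or API.

import torch

def blockwise_sparse_attention(q, k, v, block_size=64, topk_blocks=4):
    # Hypothetical sketch: each query block attends only to the top-k key/value
    # blocks, chosen by comparing mean-pooled block representations.
    # q, k, v: (seq_len, head_dim); a single attention head for clarity.
    seq_len, head_dim = q.shape
    scale = head_dim ** -0.5
    out = torch.zeros_like(q)

    num_blocks = (seq_len + block_size - 1) // block_size
    # Mean-pooled representation of each key block, used to pick blocks to keep.
    k_blocks = torch.stack([
        k[i * block_size:(i + 1) * block_size].mean(dim=0)
        for i in range(num_blocks)
    ])  # (num_blocks, head_dim)

    for qi in range(num_blocks):
        q_blk = q[qi * block_size:(qi + 1) * block_size]
        # Score key blocks against the pooled query block and keep the top-k.
        block_scores = q_blk.mean(dim=0) @ k_blocks.T
        keep = block_scores.topk(min(topk_blocks, num_blocks)).indices.tolist()

        # Gather only the selected key/value rows and attend over them.
        idx = torch.cat([
            torch.arange(b * block_size, min((b + 1) * block_size, seq_len))
            for b in keep
        ])
        k_sel, v_sel = k[idx], v[idx]

        attn = torch.softmax((q_blk @ k_sel.T) * scale, dim=-1)
        out[qi * block_size:(qi + 1) * block_size] = attn @ v_sel
    return out

# Example: 256-token sequence, 64-dim head.
q = torch.randn(256, 64)
k = torch.randn(256, 64)
v = torch.randn(256, 64)
print(blockwise_sparse_attention(q, k, v).shape)  # torch.Size([256, 64])

Dense attention would instead score every query against every key, which is exactly the cost the sparse variant avoids for long sequences.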

269 stars. No commits in the last 6 months.

Use this if you are developing or fine-tuning large language models and need to accelerate attention computations for both training and inference, especially with long input sequences.

Not ideal if you are a casual user of existing LLMs and do not need to implement or optimize the underlying attention mechanisms.

large-language-models natural-language-processing deep-learning machine-learning-engineering
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 269
Forks: 19
Language: Python
License: Apache-2.0
Last pushed: May 23, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/XunhaoLai/native-sparse-attention-triton"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
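If you prefer to consume the endpoint from Python rather than curl, here is a minimal sketch using only the standard library. The response is assumed to be JSON; no particular field names are assumed beyond whatever the endpoint actually returns.

import json
import urllib.request

# Same endpoint as the curl command above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/XunhaoLai/native-sparse-attention-triton")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Print the full payload to inspect the fields the API provides.
print(json.dumps(data, indent=2))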