skylight-org/sparse-attention-hub

Advancing the frontier of efficient AI

Quality score: 43 / 100 (Emerging)

This framework helps AI researchers and developers working with large language models implement and evaluate sparse attention mechanisms. It takes various sparse attention configurations and a chosen large language model, then outputs performance benchmarks across multiple long-context datasets. Its primary users are AI research engineers and machine learning scientists optimizing transformer models for longer text inputs.

Use this if you need to experiment with, implement, and rigorously benchmark different sparse attention algorithms for large language models to improve efficiency and performance on long-context tasks.

Not ideal if you are an end-user of an AI application and not directly involved in the research and development of transformer model architectures.
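
To make "sparse attention" concrete: instead of letting every token attend to every other token (quadratic cost in sequence length), sparse patterns restrict each query to a subset of keys. The sketch below shows one of the simplest such patterns, a sliding window, in plain PyTorch. It is illustrative only and is not code from this repository; the function name and tensor shapes are assumptions.

import torch
import torch.nn.functional as F

def local_window_attention(q, k, v, window: int):
    """Toy sliding-window sparse attention: each query position only
    attends to keys within `window` positions of itself. (Hypothetical
    helper for illustration, not part of sparse-attention-hub.)"""
    seq_len, head_dim = q.shape
    scores = q @ k.T / head_dim ** 0.5          # (seq_len, seq_len)
    pos = torch.arange(seq_len)
    # Mask out query/key pairs farther apart than the window.
    mask = (pos[None, :] - pos[:, None]).abs() > window
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# 1,024 tokens, a 64-dim head, and a 128-token window.
q, k, v = (torch.randn(1024, 64) for _ in range(3))
out = local_window_attention(q, k, v, window=128)
print(out.shape)  # torch.Size([1024, 64])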

Tags: AI-research, natural-language-processing, large-language-models, model-optimization, machine-learning-engineering
No package · No dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 15 / 25
Community 10 / 25

Stars: 54
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/skylight-org/sparse-attention-hub"

Open to everyone: 100 requests/day, no key needed. Get a free API key for 1,000 requests/day.
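
If you prefer Python to curl, the same endpoint can be queried with the requests library. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented on this page):

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/skylight-org/sparse-attention-hub")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()     # surfaces HTTP errors (e.g. rate limiting)
print(resp.json())          # pretty-printing is left to the caller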