Infini-AI-Lab/vortex_torch
Vortex: A Flexible and Efficient Sparse Attention Framework
Vortex helps AI researchers and engineers develop and deploy custom sparse attention algorithms for large language models (LLMs). From a specification of a new sparse attention pattern, it generates optimized code that runs efficiently on modern inference systems. This tool is for people focused on advancing LLM efficiency through novel attention mechanisms.
Use this if you need to rapidly prototype, extend, or deploy custom sparse attention algorithms for LLM inference without dealing with low-level optimizations.
Not ideal if you are an end-user of LLMs and not involved in their underlying architectural research or engineering.
Stars: 49
Forks: 3
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 21, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Infini-AI-Lab/vortex_torch"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
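The same endpoint can be called from Python. A minimal sketch using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the JSON response schema is not documented on this page, so the fetch helper simply returns the parsed payload:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality data as parsed JSON.

    The response schema is undocumented here, so this just returns
    whatever the API sends back.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("Infini-AI-Lab", "vortex_torch"))
```

Without an API key this stays within the 100 requests/day anonymous limit; a key would presumably be passed as a header or query parameter, but the page does not specify the mechanism.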
Higher-rated alternatives
fla-org/flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models
thu-ml/SageAttention
[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x...
thu-ml/SpargeAttn
[ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference.
fla-org/flame
🔥 A minimal training framework for scaling FLA models
foundation-model-stack/fms-fsdp
🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for...