haoliuhl/ringattention

Large Context Attention

Score: 43 / 100 (Emerging)

This is a specialized tool for AI/ML engineers working with large language models. It helps train models with extremely long input sequences—tens of millions of tokens—that would normally exceed GPU memory limits. By distributing computations across multiple devices (GPUs/TPUs) and overlapping data transfer with processing, it allows for significantly larger context windows in transformer models.
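To make the mechanism concrete, below is a minimal single-device sketch in JAX of blockwise attention, the building block that Ring Attention distributes across devices: keys and values are processed one block at a time with a streaming softmax, so memory scales with the block size rather than the full sequence length. This is an illustration of the idea only, not the haoliuhl/ringattention API; the function name, shapes, and block size are arbitrary choices for the example, and the sequence length is assumed to be divisible by the block size.

import jax
import jax.numpy as jnp

def blockwise_attention(q, k, v, block_size=128):
    # q: [seq_q, d]; k, v: [seq_kv, d]; seq_kv assumed divisible by block_size.
    scale = q.shape[-1] ** -0.5
    num_blocks = k.shape[0] // block_size

    # Running accumulators for the streaming (online) softmax.
    acc = jnp.zeros_like(q)                       # weighted sum of values
    row_max = jnp.full((q.shape[0],), -jnp.inf)   # running max of logits per query
    denom = jnp.zeros((q.shape[0],))              # running softmax denominator

    for i in range(num_blocks):
        k_blk = k[i * block_size:(i + 1) * block_size]
        v_blk = v[i * block_size:(i + 1) * block_size]
        logits = (q @ k_blk.T) * scale            # [seq_q, block_size]

        new_max = jnp.maximum(row_max, logits.max(axis=-1))
        correction = jnp.exp(row_max - new_max)   # rescale previous accumulators
        p = jnp.exp(logits - new_max[:, None])

        acc = acc * correction[:, None] + p @ v_blk
        denom = denom * correction + p.sum(axis=-1)
        row_max = new_max

    return acc / denom[:, None]

# Tiny usage example with random data.
kq, kk, kv = jax.random.split(jax.random.PRNGKey(0), 3)
q = jax.random.normal(kq, (256, 64))
k = jax.random.normal(kk, (1024, 64))
v = jax.random.normal(kv, (1024, 64))
out = blockwise_attention(q, k, v)                # [256, 64]

In the full Ring Attention scheme described above, each device holds its own query block and the key/value blocks are rotated around a ring of devices, so the transfer of the next block overlaps with computation on the current one.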

770 stars. No commits in the last 6 months.

Use this if you are an AI/ML engineer needing to train transformer models on datasets with context lengths far exceeding typical GPU memory capacities, such as sequences with millions of tokens.

Not ideal if you are not an AI/ML engineer, are working with smaller language models, or are not comfortable with advanced JAX and distributed computing concepts.

Large Language Models · Distributed AI Training · Transformer Architectures · Deep Learning Infrastructure · High-Performance Computing
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25

Stars: 770
Forks: 52
Language: Python
License: Apache-2.0
Last pushed: Oct 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/haoliuhl/ringattention"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
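
For scripted access, here is a minimal sketch of fetching the same data from Python with the requests library. It assumes the endpoint returns a JSON body (the response schema is not documented here), so the payload is printed as-is.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/haoliuhl/ringattention"
resp = requests.get(url, timeout=30)
resp.raise_for_status()  # surfaces rate-limit or server errors
print(resp.json())       # assumes a JSON body; schema not documented here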