haoliuhl/ringattention
Large Context Attention
This is a specialized tool for AI/ML engineers working with large language models. It helps train models with extremely long input sequences—tens of millions of tokens—that would normally exceed GPU memory limits. By distributing computations across multiple devices (GPUs/TPUs) and overlapping data transfer with processing, it allows for significantly larger context windows in transformer models.
770 stars. No commits in the last 6 months.
Use this if you are an AI/ML engineer who needs to train transformer models on sequences whose context lengths far exceed typical GPU memory capacity, for example millions of tokens.
Not ideal if you are not an AI/ML engineer, are working with smaller language models, or are not comfortable with advanced JAX and distributed computing concepts.
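A minimal, illustrative sketch of the ring-attention idea in plain JAX (this is not the repository's API; the function name, shapes, and the pmap/ppermute setup below are assumptions chosen only to show the mechanism): each device keeps its query block resident, key/value blocks rotate one hop around the device ring per step, and partial attention outputs are merged with a numerically stable online softmax.

import jax
import jax.numpy as jnp
from functools import partial

N_DEV = jax.device_count()
RING = [(i, (i + 1) % N_DEV) for i in range(N_DEV)]   # send each shard to the next device

@partial(jax.pmap, axis_name="ring")
def ring_attention(q, k, v):
    # q, k, v: [block_len, head_dim] shards, one sequence block per device.
    num = jnp.zeros_like(q)                            # running numerator
    den = jnp.zeros((q.shape[0], 1))                   # running softmax denominator
    m = jnp.full((q.shape[0], 1), -jnp.inf)            # running row-wise max
    for _ in range(N_DEV):
        scores = q @ k.T / jnp.sqrt(q.shape[-1])       # [block_len, block_len]
        m_new = jnp.maximum(m, scores.max(axis=-1, keepdims=True))
        p = jnp.exp(scores - m_new)
        scale = jnp.exp(m - m_new)                     # rescale previous partial sums
        num = num * scale + p @ v
        den = den * scale + p.sum(axis=-1, keepdims=True)
        m = m_new
        # Rotate key/value blocks one step around the device ring.
        k = jax.lax.ppermute(k, "ring", perm=RING)
        v = jax.lax.ppermute(v, "ring", perm=RING)
    return num / den

# Example: a 4096-token sequence split evenly across the available devices.
seq_len, head_dim = 4096, 64
block = seq_len // N_DEV
q, k, v = jax.random.normal(jax.random.PRNGKey(0), (3, N_DEV, block, head_dim))
out = ring_attention(q, k, v)                          # [N_DEV, block, head_dim]

The actual library overlaps the key/value transfer with the attention computation for the current block, which is what hides the communication cost; the sketch above performs them sequentially for readability.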
Stars: 770
Forks: 52
Language: Python
License: Apache-2.0
Category: Transformers
Last pushed: Oct 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/haoliuhl/ringattention"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action