vmarinowski/infini-attention
An unofficial PyTorch implementation of 'Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention'
This project provides an alternative attention mechanism for deep learning models, particularly Transformers, to process very long input sequences efficiently. Infini-attention augments standard dot-product attention with a compressive long-term memory: each block attends locally to the current segment while retrieving from, and updating, a fixed-size memory of all past segments, so the memory footprint stays bounded no matter how long the input grows. Deep learning researchers and engineers working on large language models or other sequence-to-sequence tasks would use this.
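A minimal, single-head sketch of the mechanism as the paper describes it, not this repository's actual code (function and variable names here are illustrative): an ELU+1 feature map, a linear-attention memory update M ← M + σ(K)ᵀV, and a learned gate that mixes memory retrieval with local causal attention.

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # Non-negative feature map (ELU + 1) used for the linear-attention memory.
    return F.elu(x) + 1.0

def infini_attention_segment(q, k, v, mem, z, beta):
    # q, k, v: (seg_len, d) projections for the current segment.
    # mem: (d, d) compressive memory; z: (d,) normalizer, carried across segments.
    # beta: learned scalar gate.
    d = q.size(-1)

    # Local causal dot-product attention within the segment.
    scores = (q @ k.T) / d ** 0.5
    future = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    local = F.softmax(scores.masked_fill(future, float("-inf")), dim=-1) @ v

    # Retrieve long-term context from the compressive memory.
    sq = elu_plus_one(q)
    retrieved = (sq @ mem) / (sq @ z).clamp(min=1e-6).unsqueeze(-1)

    # Update memory and normalizer with this segment's keys and values.
    sk = elu_plus_one(k)
    mem = mem + sk.T @ v
    z = z + sk.sum(dim=0)

    # Learned gate mixes long-term retrieval with local attention.
    g = torch.sigmoid(beta)
    return g * retrieved + (1.0 - g) * local, mem, z
```

Across a long input, segments are processed in order, threading `mem` and `z` forward, so earlier context remains retrievable at constant memory cost.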
No commits in the last 6 months.
Use this if you are a deep learning practitioner building or experimenting with Transformer models and need to handle extremely long input sequences without losing critical context, or if you want to explore more efficient attention mechanisms.
Not ideal if you are not working with deep learning models or do not have a strong understanding of Transformer architectures and attention mechanisms.
Stars: 55
Forks: 9
Language: Python
License: —
Category: —
Last pushed: Aug 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/vmarinowski/infini-attention"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
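The same call from Python using only the standard library; a sketch assuming the endpoint returns JSON, since the response schema isn't documented on this page.

```python
import json
from urllib.request import urlopen

# Endpoint from the curl example above; no API key needed
# within the free 100 requests/day tier.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/vmarinowski/infini-attention")

with urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON payload; exact fields not documented here

print(data)
```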
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of the latest iteration of Danijar's Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action