tomaarsen/attention_sinks
Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
This project helps developers extend large language models (LLMs) so they can generate much longer, coherent text without running out of memory. It applies the "attention sinks" approach: the key-value cache retains the first few tokens (the sinks) plus a sliding window of the most recent tokens, so memory use stays constant and fluency holds even when generation runs far beyond the model's original training length. It takes an existing pre-trained LLM as input and returns the same model with enhanced long-context generation abilities.
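A minimal usage sketch, assuming the package's drop-in replacements for Hugging Face transformers classes; the attention_sink_size and attention_sink_window_size keyword arguments follow the project's documented API, but the model name is illustrative and exact defaults may vary by version:

from transformers import AutoTokenizer
from attention_sinks import AutoModelForCausalLM  # drop-in replacement for transformers' class

model_id = "mistralai/Mistral-7B-v0.1"  # illustrative; any supported decoder-only model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    attention_sink_size=4,            # first tokens kept permanently as "sinks"
    attention_sink_window_size=1020,  # sliding window of the most recent tokens
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
# Generation can run far past the training context; the KV cache stays capped
# at attention_sink_size + attention_sink_window_size entries.
output = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(output[0], skip_special_tokens=True))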
736 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are a developer building applications with LLMs and need them to handle extremely long conversations, documents, or continuous text generation without performance degradation or memory issues.
Not ideal if you are an end-user without programming knowledge, or if your LLM applications only require processing short prompts and responses.
Stars: 736
Forks: 45
Language: Python
License: Apache-2.0
Category: transformers
Last pushed: Apr 10, 2024
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tomaarsen/attention_sinks"
Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000 requests/day.
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action