tomaarsen/attention_sinks

Extend existing LLMs way beyond the original training length with constant memory usage, without retraining

Score: 49 / 100 (Emerging)

This project helps developers extend large language models (LLMs) to generate much longer, coherent text without running out of memory. By keeping a handful of initial "attention sink" tokens in the key/value cache alongside a sliding window of recent tokens, it lets models stay fluent indefinitely, even across millions of tokens. It wraps existing pre-trained LLMs and returns the same model with constant-memory long-context generation.
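The cache policy behind this approach can be sketched in a few lines. This is a simplified illustration of "a few sinks plus a sliding window", not the project's implementation; the default sizes here are hypothetical.

```python
from collections import deque

def make_sink_cache(sink_size=4, window_size=8):
    """Illustrative key/value cache policy behind attention sinks:
    always keep the first `sink_size` token positions, plus a sliding
    window of the most recent `window_size` positions. (Sizes are
    hypothetical examples, not values taken from the project.)"""
    sinks = []                          # initial tokens, kept forever
    window = deque(maxlen=window_size)  # recent tokens, oldest evicted

    def append(token):
        if len(sinks) < sink_size:
            sinks.append(token)
        else:
            window.append(token)        # deque drops the oldest automatically
        return sinks + list(window)     # positions the model attends to

    return append

append = make_sink_cache(sink_size=4, window_size=8)
cache = None
for t in range(20):                     # feed 20 token positions
    cache = append(t)
# Cache size stays constant at sink_size + window_size = 12:
# the 4 sink positions plus the 8 most recent ones.
```

Because the cache never grows past `sink_size + window_size` entries, memory stays constant no matter how long generation runs, which is the property the project exploits.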

736 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are a developer building applications with LLMs and need them to handle extremely long conversations, documents, or continuous text generation without performance degradation or memory issues.

Not ideal if you are an end-user without programming knowledge, or if your LLM applications only require processing short prompts and responses.

Tags: LLM development, natural language generation, long-context processing, AI model optimization, chatbot development
Stale for 6 months

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 14 / 25


Stars: 736
Forks: 45
Language: Python
License: Apache-2.0
Last pushed: Apr 10, 2024
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tomaarsen/attention_sinks"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
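The same endpoint can be called from Python. A minimal sketch: the URL layout (base path, then ecosystem, owner, and repo segments) is inferred from the single documented curl example, so treat it as an assumption rather than documented API structure.

```python
import urllib.parse

# Base path taken from the documented curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    """Build the quality-API URL for a repository. The meaning of the
    first path segment ("transformers" in the example) is inferred,
    not documented here."""
    parts = [urllib.parse.quote(s) for s in (ecosystem, owner, repo)]
    return "/".join([BASE] + parts)

url = quality_url("transformers", "tomaarsen", "attention_sinks")
# Reproduces the documented endpoint for this project.
```

To actually fetch the data, pass `url` to `urllib.request.urlopen` or any HTTP client; the response format is not specified on this page, so inspect it before relying on particular fields.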