kyegomez/LongNet

Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"

Score: 51 / 100 (Established)

This project gives machine learning engineers a way to work with extremely long text or data sequences. It plugs into existing Transformer models and lets them process inputs of up to a billion tokens, a significant leap beyond typical context limits. This helps researchers and developers building large language models or other sequence-based AI systems that need to analyze vast amounts of information.
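
As a rough illustration, here is a minimal usage sketch in the style of the repository README. The package name (longnet), import path (long_net), and the DilatedAttention constructor arguments are assumptions taken from that README and may differ between versions.

import torch
from long_net import DilatedAttention  # import path assumed from the repo README

dim = 512          # embedding dimension
heads = 8          # number of attention heads
dilation_rate = 2  # stride used to sparsify tokens within each segment
segment_size = 64  # tokens per attention segment

# Stand-in batch of token embeddings: (batch, seq_len, dim)
x = torch.randn(2, 8192, dim)

attn = DilatedAttention(dim, heads, dilation_rate, segment_size)
out = attn(x)  # output keeps the input shape: (2, 8192, dim)
print(out.shape)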

714 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning engineer or researcher developing Transformer-based models and need to process sequences far longer than traditional methods allow, such as an entire book or a vast dataset.

Not ideal if you do not work hands-on with deep learning models, or if your tasks only involve short- to medium-length text sequences.

large-language-models natural-language-processing sequence-modeling AI-model-development machine-learning-research
Stale for 6 months

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 16 / 25


Stars: 714
Forks: 61
Language: Python
License: Apache-2.0
Last pushed: Jan 07, 2024
Commits (30d): 0
Dependencies: 9

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/LongNet"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
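
For scripted access, a Python equivalent of the curl call might look like the sketch below. Only the URL comes from this page; the response schema is not documented here, so inspect the payload before relying on specific fields.

import requests  # third-party HTTP client: pip install requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/LongNet"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raise on HTTP errors (e.g., rate limiting)

data = resp.json()  # payload schema isn't documented here; inspect before use
print(data)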