m-a-n-i-f-e-s-t/retention

Language modeling with linear-cost context

Quality score: 41 / 100 (Emerging)

This project provides a specialized PyTorch layer for building large language models more efficiently. It processes long token sequences, such as code or extensive documents, at a cost that grows linearly with context length rather than quadratically, producing representations that require far less compute. This makes it suited to AI applications that must handle very long text contexts without prohibitive costs.
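The project's own API is not documented on this page, so the following is only a hypothetical, dependency-free sketch of the general linear-cost retention idea the description alludes to: instead of attending over all past tokens (quadratic in sequence length), each step folds the new key/value pair into a decayed state matrix and reads it out with the query, so the whole sequence costs O(n) steps. The function names, the scalar `decay`, and the plain-list tensors are all illustrative assumptions, not the project's interface.

```python
# Hypothetical sketch of a linear-cost retention recurrence (not this
# project's actual API). State matrix S is carried across steps:
#   S_t = decay * S_{t-1} + outer(k_t, v_t)
#   out_t = q_t @ S_t
# Each step costs O(d^2), so a length-n sequence costs O(n * d^2)
# instead of the O(n^2 * d) of full attention.

def retention_step(S, q, k, v, decay=0.9):
    """One recurrent step: decay the state, add outer(k, v), read with q."""
    d = len(q)
    S = [[decay * S[i][j] + k[i] * v[j] for j in range(d)] for i in range(d)]
    out = [sum(q[i] * S[i][j] for i in range(d)) for j in range(d)]
    return S, out

def retention_sequence(qs, ks, vs, decay=0.9):
    """Process a whole sequence in one linear pass over the tokens."""
    d = len(qs[0])
    S = [[0.0] * d for _ in range(d)]  # state starts at zero
    outs = []
    for q, k, v in zip(qs, ks, vs):
        S, o = retention_step(S, q, k, v, decay)
        outs.append(o)
    return outs
```

Because the state `S` is a fixed-size d×d matrix, the same recurrence also serves real-time generation: decoding one more token costs O(d²) regardless of how long the context already is, which is the property the "use this if" guidance below refers to.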

117 stars. No commits in the last 6 months.

Use this if you are developing large language models and need to process extremely long text sequences efficiently for both training and real-time text generation.

Not ideal if you are working with short text snippets, don't have access to CUDA-enabled GPUs, or are not building custom deep learning models.

Tags: large-language-models, natural-language-processing, deep-learning-infrastructure, computational-efficiency
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 15 / 25
Community 14 / 25


Stars: 117
Forks: 14
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Sep 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/m-a-n-i-f-e-s-t/retention"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.