DAMO-NLP-SG/CLEX

[ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models

Score: 39 / 100 (Emerging)

This project provides enhanced large language models that can handle much longer texts without losing accuracy. It takes existing models such as LLaMA-2 or Mixtral and modifies them to process inputs up to 8 times longer than their original training context window. This is useful for anyone working with very long documents, conversations, or codebases who needs a language model to maintain context across extensive content.

No commits in the last 6 months.

Use this if you need a language model to analyze, summarize, or generate text from extremely long documents, like entire books, lengthy research papers, or extensive legal contracts, without encountering context window limitations.

Not ideal if your primary use case involves short, conversational queries or if you require a language model that has not been specifically re-trained or fine-tuned for extended context.
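If you want to try one of the released checkpoints, a minimal sketch using Hugging Face Transformers might look like the following. The model ID and input file below are placeholders, not confirmed by this listing; check the repository's README for the actual checkpoint names, and note that trust_remote_code is typically required for custom architectures like this.

# Sketch: loading a CLEX-extended checkpoint via Transformers.
# The model ID is a hypothetical placeholder; see the repo for real names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K"  # placeholder, check the repo
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

long_document = open("contract.txt").read()  # e.g. a lengthy legal contract
inputs = tokenizer(long_document, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))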

document-analysis long-form-content language-model-finetuning text-generation information-retrieval
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 14 / 25

Stars: 78
Forks: 11
Language: Python
License: MIT
Last pushed: Mar 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DAMO-NLP-SG/CLEX"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
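A rough Python equivalent of the curl call is sketched below. It assumes the endpoint simply returns JSON; the response schema and any API-key mechanism are not documented here, so this only performs the unauthenticated request.

# Sketch: fetching the same quality data from Python instead of curl.
# Assumes a JSON response; field names are not documented in this listing.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/DAMO-NLP-SG/CLEX"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())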