THUDM/LongAlign

[EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs

Quality score: 39/100 (Emerging)

If you're building or fine-tuning large language models (LLMs) and need them to handle very long texts, LongAlign provides a complete toolkit. It improves an LLM's ability to understand and respond accurately to queries over documents that are tens of thousands of tokens long. It's aimed at AI engineers and researchers developing advanced LLMs for real-world applications.

259 stars. No commits in the last 6 months.

Use this if you are developing or fine-tuning an LLM and want to significantly extend its capability to process and understand very long documents and conversations, up to 100,000 tokens.

Not ideal if you are looking for a pre-trained, ready-to-use LLM without needing to train or fine-tune models yourself.

Tags: LLM training, long-context AI, natural language processing, AI model alignment, deep learning
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0/25
Adoption: 10/25
Maturity: 16/25
Community: 13/25


Stars: 259
Forks: 21
Language: Python
License: Apache-2.0
Last pushed: Dec 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/THUDM/LongAlign"

Open to everyone: 100 requests/day with no API key needed. A free key raises the limit to 1,000 requests/day.
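If you'd rather consume the endpoint from a script than from curl, the sketch below shows how the response might be parsed. The JSON field names (`repo`, `score`, `breakdown`) are assumptions for illustration; the API's actual schema is not documented on this page, so check a live response before relying on specific keys.

```python
import json

# Hypothetical response body -- field names are assumed, not taken
# from the API's documentation. The numbers mirror this scorecard.
sample_response = """
{
  "repo": "THUDM/LongAlign",
  "score": 39,
  "breakdown": {
    "maintenance": 0,
    "adoption": 10,
    "maturity": 16,
    "community": 13
  }
}
"""

data = json.loads(sample_response)

# The four sub-scores (each out of 25) should sum to the overall /100 score.
total = sum(data["breakdown"].values())
print(f'{data["repo"]}: {data["score"]}/100 (sub-scores sum to {total})')
```

To fetch a real response instead of the sample, replace `sample_response` with the body returned by the curl command above (e.g. via `urllib.request` or `requests`).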