jeffreysijuntan/lloco
The official repo for "LLoCo: Learning Long Contexts Offline"
This project helps AI/ML researchers and engineers train large language models (LLMs) to understand and process very long documents more efficiently. It takes in raw text documents and pre-processed summary embeddings, then outputs a finetuned LLM capable of handling extended contexts without excessive computational cost. This is for users working on advanced natural language processing tasks who need to optimize LLM performance for lengthy inputs.
118 stars. No commits in the last 6 months.
Use this if you need to train or finetune large language models to process and understand extremely long text documents, such as research papers or legal briefs, more effectively and with reduced computational demands.
Not ideal if you are looking for a pre-trained, ready-to-use application or if your primary need is for general-purpose language generation on shorter texts.
Stars: 118
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Jun 15, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jeffreysijuntan/lloco"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
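The curl call above returns JSON. A minimal Python sketch for consuming it, using a canned sample response shaped after the stats shown on this page; the field names are an assumption, not confirmed by the API's documentation. For a live request you would fetch the same URL with `urllib.request.urlopen` and decode the body before parsing.

```python
import json

# Hypothetical response body: field names mirror the stats listed on this
# page, but the actual pt-edge API schema is an assumption here.
sample = """{
  "repo": "jeffreysijuntan/lloco",
  "stars": 118,
  "forks": 8,
  "language": "Python",
  "license": "MIT",
  "last_pushed": "2024-06-15",
  "commits_30d": 0
}"""

data = json.loads(sample)
print(f'{data["repo"]}: {data["stars"]} stars, last pushed {data["last_pushed"]}')
# → jeffreysijuntan/lloco: 118 stars, last pushed 2024-06-15
```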
Higher-rated alternatives
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
p-e-w/heretic
Fully automatic censorship removal for language models
Orion-zhen/abliteration
Make abliterated models with transformers, easy and fast
YerbaPage/LongCodeZip
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
locuslab/wanda
A simple and effective LLM pruning approach.