AlgonetLabs/Cable
Context-aware Biases for Length Extrapolation
This project helps AI researchers and practitioners evaluate how well language models handle texts far longer than their training inputs. It takes pretrained GPT or BERT models and various datasets as input, then measures how accurately those models process and understand sequences much longer than those seen during training. The primary users are machine learning engineers and researchers developing and improving large language models.
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer developing large language models and need to rigorously test their ability to understand and generate text far beyond the typical length they were trained on.
Not ideal if you are an end-user looking for a ready-to-use application, as this is a research toolkit for model development and evaluation.
Stars: 22
Forks: 8
Language: Python
License: —
Category:
Last pushed: May 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AlgonetLabs/Cable"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ZHZisZZ/dllm
dLLM: Simple Diffusion Language Modeling
pengzhangzhi/Open-dLLM
Open diffusion language model for code generation — releasing pretraining, evaluation,...
EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. ACM...
THUDM/LongWriter
[ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
AIoT-MLSys-Lab/SVD-LLM
[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2