AlgonetLabs/Cable

Context-aware Biases for Length Extrapolation

Score: 33/100 (Emerging)

This project helps AI researchers and practitioners evaluate how well their language models handle very long texts, even when those models were trained on shorter inputs. It takes pretrained GPT or BERT models and various datasets as input, then measures how accurately the models process and understand sequences far longer than those they saw during training. The primary users are machine learning engineers and researchers developing and improving large language models.
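
To make the idea concrete, here is a minimal sketch of the kind of check such a toolkit automates: scoring a pretrained causal LM on a sequence longer than its training context and seeing how perplexity holds up. This is not Cable's own API; the model and input choices are illustrative assumptions. BLOOM is used because its ALiBi position biases let it accept inputs past its trained length.

# Sketch: long-sequence perplexity for a pretrained causal LM.
# Assumptions: Hugging Face transformers is installed; the model
# (BLOOM, with ALiBi biases) accepts inputs longer than its training
# context. This is NOT Cable's own evaluation code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"  # ALiBi-based, so long inputs are accepted
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

long_text = "..."  # placeholder: any document longer than the training context
input_ids = tokenizer(long_text, return_tensors="pt").input_ids

with torch.no_grad():
    # labels == inputs gives the standard shifted next-token loss
    loss = model(input_ids, labels=input_ids).loss

print(f"tokens: {input_ids.size(1)}, perplexity: {torch.exp(loss).item():.2f}")

A flat perplexity curve as the input grows beyond the training length indicates good extrapolation; a sharp rise indicates the model's position handling breaks down, which is what this kind of benchmark is designed to expose.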

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer developing large language models and need to rigorously test their ability to understand and generate text far beyond the sequence lengths they were trained on.

Not ideal if you are an end-user looking for a ready-to-use application, as this is a research toolkit for model development and evaluation.

Tags: large-language-models, natural-language-processing, model-evaluation, AI-research, deep-learning
Badges: No License, Stale (6 months), No Package, No Dependents
Score breakdown (the four sub-scores sum to the 33/100 overall):
Maintenance: 2/25
Adoption: 6/25
Maturity: 8/25
Community: 17/25


Stars: 22
Forks: 8
Language: Python
License: None
Last pushed: May 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AlgonetLabs/Cable"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
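
For programmatic use, here is a hedged sketch of calling the same endpoint from Python. The URL is taken from the curl example above; the response is assumed to be JSON, and the actual field names are not documented here, so the sketch simply pretty-prints whatever comes back.

# Fetch the quality data for AlgonetLabs/Cable from the endpoint shown
# in the curl example. Assumption: the endpoint returns JSON; field
# names are unknown here, so we just pretty-print the payload.
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/AlgonetLabs/Cable"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))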