LSquaredM/mutual_info_scaling_law

(NeurIPS 2025) Official Code for L²M: Mutual Information Scaling Law for Long-Context Language Modeling

Score: 27 / 100 (Experimental)

This project helps AI researchers understand how effectively large language models (LLMs) process and retain information over very long texts. It takes text data and LLM configurations as input, then measures how much information the model captures about distant parts of the text. The primary users are researchers and practitioners focused on improving the performance of LLMs in handling extended contexts.
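
For intuition, here is a minimal sketch (not the repository's actual code) of the kind of measurement involved: using a causal LM's log-likelihoods as a rough proxy for I(X; Y) = H(Y) - H(Y | X) between an early segment X and a distant segment Y. The model name, helper function, and example strings are all illustrative placeholders.

# Rough proxy for mutual information between a distant context X and a
# later segment Y, via the model's own negative log-likelihoods.
# Everything here is illustrative; it is not the repo's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM should work
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def neg_log_likelihood(text: str, context: str = "") -> float:
    """Total -log p(text | context) in nats, summed over text's tokens."""
    bos = torch.tensor([[tok.bos_token_id]])
    ctx = (tok(context, return_tensors="pt").input_ids
           if context else torch.empty((1, 0), dtype=torch.long))
    txt = tok(text, return_tensors="pt").input_ids
    ids = torch.cat([bos, ctx, txt], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Position i predicts token i+1; keep only the tokens of `text`.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(len(targets)), targets]
    n_prefix = 1 + ctx.shape[1]  # BOS + context tokens
    return nll[n_prefix - 1:].sum().item()

x = "The treasure map was hidden inside the old clock on the mantel. "
y = "She opened the clock and pulled out the folded map."
# I(X; Y) ~ H(Y) - H(Y | X): nats of uncertainty about Y resolved by X.
mi_nats = neg_log_likelihood(y) - neg_log_likelihood(y, context=x)
print(f"Estimated I(X; Y) ~ {mi_nats:.2f} nats")

A large positive value means seeing X substantially reduces the model's uncertainty about Y; the L²M paper's scaling-law question concerns how such long-range dependence grows with context length.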

Use this if you are a researcher studying the theoretical underpinnings of long-context language models and want to reproduce or extend experiments on mutual information scaling laws.

Not ideal if you are looking for a tool to directly fine-tune or deploy an LLM for a specific application.

large-language-models natural-language-processing-research ai-model-evaluation machine-learning-theory long-context-ai
No package published · No dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25

Stars: 13
Forks:
Language: Python
License:
Last pushed: Nov 09, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/LSquaredM/mutual_info_scaling_law"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
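
The same endpoint can also be queried from a script. A minimal Python sketch follows; the response schema is not documented here, so it simply prints the raw JSON.

# Fetch the quality data for this repo from the public API endpoint above.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/LSquaredM/mutual_info_scaling_law")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g., rate limiting)
print(resp.json())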