UKPLab/arxiv2025-inherent-limits-plms

Code repository for the paper "The Inherent Limits of Pretrained LLMs: The Unexpected Convergence of Instruction Tuning and In-Context Learning Capabilities"

Score: 21 / 100 (Experimental)

This project helps researchers and machine learning practitioners understand the underlying capabilities of large language models (LLMs). Given information about how an LLM was trained (e.g., instruction-tuned or not) and a set of evaluation tasks, it examines whether instruction tuning fundamentally changes an LLM's abilities or merely makes knowledge already acquired during pretraining more accessible. This is useful for anyone evaluating or designing LLM training strategies.

No commits in the last 6 months.

Use this if you are an AI researcher or machine learning engineer trying to separate the impact of instruction tuning on LLM performance from the capabilities already present after pretraining.

Not ideal if you are an end-user simply looking to apply or fine-tune an LLM for a specific real-world application without delving into its foundational training mechanisms.

Tags: AI-research, LLM-evaluation, NLP-benchmarking, model-training-strategy, computational-linguistics
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25

How are scores calculated?
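
The service's exact formula is not given here, but the four category subscores above sum exactly to the headline score. A minimal sanity check in Python, assuming an unweighted sum with each category scored out of 25:

# Assumption: the headline score is the plain sum of the four category subscores.
subscores = {"maintenance": 0, "adoption": 5, "maturity": 16, "community": 0}
total = sum(subscores.values())
print(f"{total} / 100")  # -> 21 / 100, matching the headline score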

Stars: 13
Forks:
Language: Python
License: Apache-2.0
Last pushed: Jan 16, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UKPLab/arxiv2025-inherent-limits-plms"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
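
For programmatic access from Python, a minimal sketch using the requests library is shown below. Only the endpoint URL above is taken from the listing; the response is printed as-is rather than assuming any particular JSON field names.

# Fetch the quality data for this repository (same endpoint as the curl command above).
import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/UKPLab/arxiv2025-inherent-limits-plms"
)

resp = requests.get(URL, timeout=10)  # no API key needed for up to 100 requests/day
resp.raise_for_status()               # fail loudly on HTTP errors
print(resp.json())                    # inspect the returned JSON; exact keys may vary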