huggingface/datablations

Scaling Data-Constrained Language Models

Quality score: 40 / 100 (Emerging)

This project helps machine learning researchers and practitioners efficiently train large language models, especially when data is limited. It provides preprocessed datasets, trained models, and experimental results, allowing users to understand the impact of data repetition, quality filtering, and code augmentation on model performance and resource usage. Researchers can use this to optimize their model training strategies for different data constraints.
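
To make those artifacts concrete, here is a minimal Python sketch of loading one of the project's published datasets and checkpoints from the Hugging Face Hub. Both repository IDs below are hypothetical placeholders; the exact names are not listed on this page.

from datasets import load_dataset
from transformers import AutoModelForCausalLM

# Hypothetical dataset ID: substitute a real datablations dataset from the Hub.
ds = load_dataset("datablations/example-c4-subset", split="train")
print(ds[0])

# Hypothetical model ID: substitute a real datablations checkpoint from the Hub.
model = AutoModelForCausalLM.from_pretrained("datablations/example-lm")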

342 stars. No commits in the last 6 months.

Use this if you are a researcher or practitioner focused on training large language models and need to understand how to maximize performance with limited or imperfect data resources.

Not ideal if you are looking for a plug-and-play solution for general language model fine-tuning without deep involvement in data preprocessing or model scaling research.

Tags: Language Model Training, Data Efficiency, Deep Learning Research, Model Scaling, NLP, Resource Optimization
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25
(The four subscores sum to the overall 40 / 100.)


Stars: 342
Forks: 18
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Jun 28, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/datablations"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
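
For programmatic use, here is a minimal Python sketch of the same call using the requests library. The response field names and the API-key header are assumptions, since this page does not document the schema.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/datablations"

# Anonymous access: 100 requests/day, per the note above.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()

# Field names here are guesses at the response shape, not a documented schema.
print(data.get("score"), data.get("stars"))

# With a free key (1,000 requests/day); the header name is an assumption.
# resp = requests.get(URL, headers={"X-API-Key": "YOUR_KEY"}, timeout=10)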