sanyalsunny111/Early_Weight_Avg
[COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training
This project helps machine learning engineers and researchers pre-train large language models (LLMs) more efficiently. Given your training data and model configuration, it applies early weight averaging, which averages checkpoints taken along the training trajectory, together with high learning rates, to reach a better-performing LLM in fewer training steps. It is aimed at teams focused on developing and optimizing LLMs.
No commits in the last 6 months.
Use this if you are pre-training large language models and want to accelerate convergence and improve generalization without incurring higher training costs.
Not ideal if you are working with already-trained models or on smaller machine-learning tasks outside of LLM pre-training.
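A minimal sketch of the checkpoint-averaging idea in PyTorch; the function name, checkpoint filenames, and usage pattern below are illustrative assumptions, not the repository's actual API:

import copy
import torch

def average_checkpoints(state_dicts):
    # Uniformly average a list of model state dicts (a sketch of
    # trajectory checkpoint averaging, not this repo's implementation).
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        if torch.is_floating_point(avg[key]):
            stacked = torch.stack([sd[key].float() for sd in state_dicts])
            avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
        # Non-float entries (e.g. integer step counters) keep the
        # value from the first checkpoint in the list.
    return avg

# Hypothetical usage: average the k latest checkpoints saved during
# pre-training, then evaluate the averaged model.
# ckpts = [torch.load(f"ckpt_{s}.pt") for s in latest_steps]
# model.load_state_dict(average_checkpoints(ckpts))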
Stars: 19
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Oct 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sanyalsunny111/Early_Weight_Avg"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
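For scripted access, the same endpoint can be fetched from Python using only the standard library; this sketch assumes nothing about the response beyond it being a JSON body:

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/sanyalsunny111/Early_Weight_Avg")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumes the endpoint returns JSON

print(json.dumps(data, indent=2))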
Higher-rated alternatives
AI-Hypercomputer/maxtext
A simple, performant and scalable Jax LLM!
rasbt/reasoning-from-scratch
Implement a reasoning LLM in PyTorch from scratch, step by step
mindspore-lab/mindnlp
MindSpore + 🤗Huggingface: Run any Transformers/Diffusers model on MindSpore with seamless...
mosaicml/llm-foundry
LLM training code for Databricks foundation models
rickiepark/llm-from-scratch
Code repository for the Korean edition of <Build a Large Language Model (From Scratch)> (Gilbut, 2025)