linkedin/Liger-Kernel

Efficient Triton Kernels for LLM Training

Quality score: 77/100 (Verified)

This project helps machine learning engineers train large language models (LLMs) more efficiently. It takes existing LLMs, like those from Hugging Face, and swaps their core computational layers (e.g., RMSNorm, RoPE, SwiGLU, cross-entropy) for fused Triton kernels. The result is significantly faster training and reduced memory consumption on multi-GPU setups, allowing for larger models and datasets. It is ideal for those actively involved in developing and fine-tuning LLMs.
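A minimal sketch of how the library is typically applied: monkey-patch a Hugging Face model's core ops with Liger's Triton kernels before instantiating the model. The function and keyword names below reflect the package's patching API for Llama-style models, but exact names and defaults may vary by version, and the model ID is purely illustrative.

# Minimal sketch, assuming the liger_kernel PyPI package and a Llama-style
# Hugging Face model; function names and flags may differ across versions.
from liger_kernel.transformers import apply_liger_kernel_to_llama
from transformers import AutoModelForCausalLM

# Patch RoPE, RMSNorm, SwiGLU, and the loss with fused Triton kernels
# *before* loading the model so the patched classes are picked up.
apply_liger_kernel_to_llama(
    rope=True,
    rms_norm=True,
    swiglu=True,
    fused_linear_cross_entropy=True,  # fuses the lm_head matmul with the loss
)

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")  # illustrative model ID
# Train as usual; forward/backward now run through Liger's fused kernels,
# which is where the memory and wall-clock savings come from.

After patching, no other changes to the training loop are needed, which is why the library slots into existing Hugging Face fine-tuning setups.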

6,206 stars. Used by 3 other packages. Actively maintained with 26 commits in the last 30 days. Available on PyPI.

Use this if you are training large language models and want to speed up training while using less GPU memory, enabling longer contexts or larger batch sizes.

Not ideal if you are not directly involved in LLM development or if you are looking for a pre-trained model rather than a tool to optimize model training.

Tags: LLM training, GPU optimization, machine learning engineering, model fine-tuning, deep learning, performance
Maintenance: 20/25
Adoption: 13/25
Maturity: 25/25
Community: 19/25


Stars: 6,206
Forks: 500
Language: Python
License: BSD-2-Clause
Last pushed: Mar 13, 2026
Commits (30d): 26
Dependencies: 2
Reverse dependents: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/linkedin/Liger-Kernel"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
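For programmatic use, the same endpoint can be queried from Python. A minimal sketch with the requests library, using only the URL from the curl example above; the response schema is not documented here, so the code just inspects the raw payload rather than assuming field names.

# Minimal sketch: fetch the same quality data in Python.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/linkedin/Liger-Kernel"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface 4xx/5xx errors (e.g., rate limiting)
data = resp.json()

print(data)  # inspect the payload; expect the score and repo stats shown above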