linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training
This project helps machine learning engineers train large language models (LLMs) efficiently. It patches existing models, such as those from Hugging Face, replacing their core computational components with optimized Triton kernels. The result is significantly faster training and reduced memory consumption on multi-GPU setups, allowing for larger models and datasets. It is ideal for engineers actively developing and fine-tuning LLMs.
6,206 stars. Used by 3 other packages. Actively maintained with 26 commits in the last 30 days. Available on PyPI.
Use this if you are training large language models and want to speed up your training process while using less GPU memory, enabling you to work with longer contexts or larger batch sizes.
Not ideal if you are not directly involved in LLM development or if you are looking for a pre-trained model rather than a tool to optimize model training.
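As a sketch of the workflow described above, assuming the `liger-kernel` package and its `apply_liger_kernel_to_llama` entry point (names may differ in current releases; verify against the project docs):

```python
# Liger-Kernel monkey-patches a Hugging Face model family so its core ops
# (RMSNorm, RoPE, SwiGLU, fused cross-entropy, ...) run as Triton kernels.
# The import and function name below are assumptions based on the project's
# public entry points, guarded so the sketch runs even without the package.
patched = False
try:
    from liger_kernel.transformers import apply_liger_kernel_to_llama

    # Patch transformers' Llama implementation in place; any Llama model
    # loaded afterwards uses the optimized kernels with no other code changes.
    apply_liger_kernel_to_llama()
    patched = True
except ImportError:
    # liger-kernel (or transformers) is not installed in this environment.
    pass
```

The appeal of this design is that training scripts stay unchanged: one patch call before model loading swaps in the faster kernels.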
Stars: 6,206
Forks: 500
Language: Python
License: BSD-2-Clause
Category:
Last pushed: Mar 13, 2026
Commits (30d): 26
Dependencies: 2
Reverse dependents: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/linkedin/Liger-Kernel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
hiyouga/LlamaFactory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)