unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.
This tool helps AI engineers and researchers efficiently customize large language models (LLMs) and other AI models for specific tasks. You can input various data formats like PDFs, CSVs, and DOCX files to fine-tune models such as GPT-OSS, Llama, or Gemma. The output is a specialized AI model that performs better on your unique data, with significantly faster training and reduced memory use.
53,879 stars. Used by 8 other packages. Actively maintained with 453 commits in the last 30 days. Available on PyPI.
Use this if you need to rapidly train and deploy powerful text, audio, embedding, or vision AI models with limited GPU resources, or if you want a unified interface to manage your model development.
Not ideal if you primarily work with pre-trained models without any need for customization or advanced training techniques.
Stars: 53,879
Forks: 4,503
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 453
Dependencies: 23
Reverse dependents: 8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/unslothai/unsloth"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related projects
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)