unsloth and LlamaFactory

Unsloth optimizes the computational efficiency of fine-tuning, delivering faster training and lower VRAM usage, while LlamaFactory provides a unified framework with broad model support for configuring and running fine-tuning jobs. The two are complementary and can be combined in a single fine-tuning pipeline.
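That pipeline can be wired together directly: recent LLaMA-Factory releases expose a `use_unsloth` switch in their training YAML, so Unsloth's fused kernels accelerate a LoRA run that LlamaFactory configures. A minimal sketch, assuming the field names used in LLaMA-Factory's published example configs; the model tag, dataset name, and hyperparameters are illustrative placeholders:

```yaml
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft                 # supervised fine-tuning
do_train: true
finetuning_type: lora
lora_target: all
use_unsloth: true          # hand training over to Unsloth's optimized kernels

### dataset
dataset: my_dataset        # placeholder: a dataset registered in dataset_info.json
template: llama3
cutoff_len: 2048

### output
output_dir: saves/llama3-8b/lora/sft

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

Verify the exact field names against the example configs shipped with your installed LLaMA-Factory version, since the schema evolves between releases.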

unsloth — score 81 (Verified)
  Maintenance: 22/25 · Adoption: 15/25 · Maturity: 25/25 · Community: 19/25
  Stars: 53,879 · Forks: 4,503 · Commits (30d): 453
  Language: Python · License: Apache-2.0

LlamaFactory — score 67 (Established)
  Maintenance: 20/25 · Adoption: 10/25 · Maturity: 16/25 · Community: 21/25
  Stars: 68,347 · Forks: 8,346 · Commits (30d): 21
  Language: Python · License: Apache-2.0

No risk flags. No package download or dependents data available.

About unsloth

unslothai/unsloth

Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.

This tool helps AI engineers and researchers efficiently customize large language models (LLMs) and other AI models for specific tasks. You can input various data formats like PDFs, CSVs, and DOCX files to fine-tune models such as GPT-OSS, Llama, or Gemma. The output is a specialized AI model that performs better on your unique data, with significantly faster training and reduced memory use.
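The workflow above maps to a short script built on Unsloth's `FastLanguageModel` API. This is a sketch following Unsloth's published notebook examples, not a definitive recipe: the 4-bit model tag, data file, and LoRA hyperparameters are illustrative, a CUDA GPU is required, and the `SFTTrainer` argument names follow the older TRL versions those notebooks target.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit quantized base model; quantization plus Unsloth's fused
# kernels is where most of the VRAM and speed savings come from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # illustrative model tag
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Fine-tune on your own text data with TRL's SFTTrainer.
dataset = load_dataset("json", data_files="my_data.jsonl", split="train")
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,               # newer TRL versions use processing_class=
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
    ),
)
trainer.train()
```

Note that documents like PDFs and DOCX files must first be converted into a text or JSONL dataset (as in `my_data.jsonl` above) before they can feed a trainer.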

Tags: AI model training, natural language processing, machine learning engineering, deep learning optimization, AI research

About LlamaFactory

hiyouga/LlamaFactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

This tool helps researchers, data scientists, and ML engineers customize large language models for specific tasks. You supply an existing large language model and your own specialized dataset, and it outputs a fine-tuned model that performs better on your data or problem. It's designed for anyone who needs to adapt powerful AI models without deep programming expertise.
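In practice that input/output loop runs through the `llamafactory-cli` entry point described in the project README; the config file names below are placeholders for YAML files you would write yourself:

```shell
# Install LLaMA-Factory (the README also documents an editable install from source).
pip install llamafactory

# Launch a fine-tuning run defined entirely by a YAML config.
llamafactory-cli train my_sft_config.yaml

# Try the fine-tuned adapter interactively, or export a merged model.
llamafactory-cli chat my_infer_config.yaml
llamafactory-cli export my_merge_config.yaml
```

The same runs can be configured from the bundled web UI (`llamafactory-cli webui`), which is what makes the tool approachable without writing training code.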

Tags: AI-model-customization, natural-language-processing, computational-linguistics, machine-learning-engineering, multimodal-AI

Scores updated daily from GitHub, PyPI, and npm data.