Oumi and LlamaFactory
These projects are direct competitors with overlapping functionality: both provide unified fine-tuning frameworks for open-source LLMs and VLMs using LoRA/QLoRA. LlamaFactory supports a significantly larger model zoo (100+ models), while Oumi emphasizes ease of deployment and evaluation alongside fine-tuning.
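Both frameworks wrap parameter-efficient fine-tuning methods such as LoRA. The core idea is easy to sketch: instead of updating a large pretrained weight matrix, train a small low-rank additive update. The NumPy sketch below is illustrative only; the dimensions, rank, and alpha values are assumptions, not defaults from either project.

```python
import numpy as np

# Minimal sketch of the LoRA idea (hypothetical shapes, not either
# framework's API): instead of updating a large weight matrix W
# (d_out x d_in), learn a low-rank update B @ A with rank r << d_in,
# which has far fewer trainable parameters.

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

alpha = 16
scale = alpha / r

def forward(x):
    # Adapted layer: frozen W @ x plus the scaled low-rank update (B @ A) @ x.
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer initially matches the frozen layer.
assert np.allclose(forward(x), W @ x)

full_params = W.size                 # 512 * 512 = 262144
lora_params = A.size + B.size        # 2 * 512 * 8 = 8192
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
```

Here LoRA trains about 3% of the layer's parameters; QLoRA applies the same update on top of a quantized frozen base model to cut memory further.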
About Oumi
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
This project helps AI developers and machine learning engineers fine-tune, evaluate, and deploy large language models (LLMs) and vision-language models (VLMs). It takes raw data and an existing open-source model and produces a specialized, ready-to-use model tailored to a specific task. It is aimed at professionals building custom AI solutions, covering the workflow from initial training through production deployment.
About LlamaFactory
hiyouga/LlamaFactory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
This tool helps researchers, data scientists, and ML engineers customize large language models for specific tasks. You provide an existing large language model and your own specialized dataset, and it outputs a fine-tuned model that performs better on your data or problem. It's designed for anyone who needs to adapt powerful AI models without deep programming expertise.