Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
This tool helps AI practitioners customize Large Language Models (LLMs) to better fit specific tasks or user preferences, directly on Apple Silicon Macs. You provide a base LLM and your own dataset of examples or preferences, and the tool outputs a fine-tuned LLM whose behavior reflects your data. It's designed for machine learning engineers, data scientists, and researchers who need to adapt existing open-source LLMs.
284 stars. Available on PyPI.
Use this if you need to fine-tune a Large Language Model on your specific data for tasks like instruction following or aligning with human preferences, and you want to do it locally on an Apple Silicon device.
Not ideal if you don't work with Large Language Models, don't have access to an Apple Silicon Mac, or are looking for a no-code solution.
Stars: 284
Forks: 40
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Dependencies: 10
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Goekdeniz-Guelmez/mlx-lm-lora"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
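The same endpoint can be called from Python. A minimal sketch, assuming only the URL shown in the curl command above; the response schema and the `quality_url` helper are illustrative, not part of the documented API:

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example; treat it as an assumption.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo: str) -> str:
    """Build the quality-data endpoint for an 'owner/name' repo string (hypothetical helper)."""
    return f"{API_BASE}/{repo}"

url = quality_url("Goekdeniz-Guelmez/mlx-lm-lora")

# Fetching requires network access; the JSON structure of the response is not
# documented here, so we simply parse and print whatever comes back:
# with urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

The fetch itself is left commented out so the snippet runs offline; uncomment it to hit the live endpoint (subject to the 100 requests/day limit).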
Related models
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
ssbuild/chatglm_finetuning
chatglm 6b finetuning and alpaca finetuning
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.
SmallDoges/small-doge
Doge Family of Small Language Models