santos-sanz/mlx-lora-finetune-template
Template for fine-tuning LLMs with LoRA using Apple MLX on Mac Silicon
This tool helps researchers, data scientists, and AI hobbyists customize large language models (LLMs) for specific tasks on a Mac with Apple Silicon. You provide your own text data (JSON, plain text, or folders of documents), and it outputs a fine-tuned model ready for your application. It is ideal for adapting a general model to a specialized dataset or to a particular writing style.
Use this if you need to quickly and efficiently specialize a small language model on your Mac for a particular domain or task without deep technical expertise.
Not ideal if you plan to fine-tune very large models (over 6 billion parameters) or if you do not have an Apple Silicon Mac.
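Since the template accepts your own text data, here is a minimal sketch of preparing a JSONL training file. The one-object-per-line `{"text": ...}` schema is an assumption borrowed from the MLX-LM LoRA examples, not this template's documented format; check the template's own data loader before using it.

```python
import json
from pathlib import Path

# Hypothetical training records; the {"text": ...} schema is an assumption
# based on MLX-LM's LoRA examples, not this template's documented format.
records = [
    {"text": "Q: What does LoRA stand for?\nA: Low-Rank Adaptation."},
    {"text": "Q: Why fine-tune on Apple Silicon?\nA: MLX uses the unified memory of M-series chips."},
]

# Write one JSON object per line (JSONL), the common format for LoRA trainers.
out = Path("data") / "train.jsonl"
out.parent.mkdir(exist_ok=True)
with out.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

The same directory can hold `valid.jsonl` and `test.jsonl` splits if the trainer expects them.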
Stars: 9
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Feb 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/santos-sanz/mlx-lora-finetune-template"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
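For scripted access, a small Python sketch that builds the endpoint URL programmatically. Only the `/api/v1/quality/<category>/<owner>/<repo>` pattern is taken from the curl example above; the response schema is not documented here, so the sketch stops at constructing the request URL rather than assuming field names.

```python
# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "santos-sanz", "mlx-lora-finetune-template")
print(url)
# Fetch with any HTTP client, e.g. urllib.request.urlopen(url),
# staying within the 100 requests/day no-key limit.
```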
Higher-rated alternatives
limix-ldm-ai/LimiX
LimiX: Unleashing Structured-Data Modeling Capability for Generalist Intelligence...
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
google-research/plur
PLUR (Programming-Language Understanding and Repair) is a collection of source code datasets...
YalaLab/pillar-finetune
Finetuning framework for Pillar medical imaging models.
thuml/LogME
Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML...