DaveAldon/Distributed-ML-with-MLX
🍎👉🍏 Everything you need to get started with distributed machine learning using Apple's MLX
This project helps you customize large language models (LLMs) to perform specialized tasks or speak in a specific style, like teaching a chatbot to talk like a pirate. It takes an existing LLM and a dataset of examples as input, and outputs a fine-tuned model or adapter that behaves as desired. This is ideal for researchers, educators, or small business owners who want to personalize AI models without needing a supercomputer.
No commits in the last 6 months.
Use this if you need to fine-tune an AI model on Apple Silicon Macs but find the training process too slow or the model too large for a single machine.
Not ideal if you don't have multiple Apple Silicon Macs or a Thunderbolt connection for high-speed data transfer between machines.
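The core idea behind spreading training across several Macs can be pictured with a plain-Python sketch (no MLX required; the function name and the toy gradient values are illustrative). In data-parallel training, each machine computes gradients on its own shard of the data, then all machines average their gradients so every copy of the model stays in sync:

```python
def average_gradients(per_machine_grads):
    """Average gradient vectors computed independently on each machine.

    per_machine_grads: one list of per-parameter gradients per machine.
    Returns the element-wise mean, which every machine would then apply.
    """
    n_machines = len(per_machine_grads)
    n_params = len(per_machine_grads[0])
    return [
        sum(grads[i] for grads in per_machine_grads) / n_machines
        for i in range(n_params)
    ]

# Two machines, each holding gradients for three parameters:
grads_a = [2, -4, 6]
grads_b = [4, -2, 2]
print(average_gradients([grads_a, grads_b]))  # → [3.0, -3.0, 4.0]
```

In a real MLX setup this averaging step is what the high-speed Thunderbolt link between machines is used for, since gradients must be exchanged on every training step.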
Stars: 17
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Aug 09, 2025
Commits (30d): 0
Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DaveAldon/Distributed-ML-with-MLX"
```

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
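The same endpoint can be fetched and parsed programmatically with the Python standard library alone. A minimal sketch; note that the response field names (`stars`, `forks`, `language`) are assumptions, since the API schema is not documented here:

```python
import json
import urllib.request

# Quality-data endpoint for this repo (from the curl example above).
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/DaveAldon/Distributed-ML-with-MLX")

def summarize(payload):
    # Pull out a few fields if present; the key names are assumptions,
    # and missing keys are simply skipped.
    return ", ".join(
        f"{key}: {payload[key]}"
        for key in ("stars", "forks", "language")
        if key in payload
    )

# Example use (makes a network request):
# with urllib.request.urlopen(URL) as resp:
#     print(summarize(json.load(resp)))
```

Skipping absent keys keeps the helper robust if the API returns a partial record, which is safer than hard-coding an undocumented schema.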
Higher-rated alternatives:

- deepspeedai/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference...
- helmholtz-analytics/heat: Distributed tensors and machine learning framework with GPU and MPI acceleration in Python
- hpcaitech/ColossalAI: Making large AI models cheaper, faster, and more accessible
- horovod/horovod: Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
- bsc-wdc/dislib: The distributed computing library for Python, implemented using the PyCOMPSs programming model for HPC.