armbues/SiLLM
SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by building on Apple's MLX framework.
It lets researchers, AI developers, and data scientists use an Apple Silicon Mac for advanced LLM work: fine-tune existing models on your own datasets with methods such as LoRA or DPO, or run them for chat and experimentation. Everything happens locally on your Mac, with no need for cloud services or specialized hardware.
284 stars. No commits in the last 6 months.
Use this if you want to run, fine-tune, or experiment with LLMs locally on an Apple Silicon device, taking advantage of MLX's hardware-optimized performance.
Not ideal if you need to train extremely large models from scratch or require distributed training across multiple machines.
Stars: 284
Forks: 26
Language: Python
License: MIT
Category:
Last pushed: Jun 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/armbues/SiLLM"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
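The same endpoint can be called from code. A minimal Python sketch, using only the standard library; the endpoint URL pattern comes from the curl example above, while the shape of the returned JSON is an assumption and not documented here:

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data for a repository.

    Assumes the endpoint returns a JSON object; the exact fields
    are not specified here, so inspect the response before relying on it.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Reproduces the URL from the curl example (no network call made here).
print(quality_url("armbues", "SiLLM"))
```

Without an API key this counts against the 100 requests/day limit, so cache responses rather than fetching on every run.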
Higher-rated alternatives
Blaizzy/mlx-vlm
MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac...
b4rtaz/distributed-llama
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM...
microsoft/batch-inference
Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios.
armbues/SiLLM-examples
Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on...
kolinko/effort
An implementation of bucketMul LLM inference