armbues/SiLLM

SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.

Score: 43/100 (Emerging)

This tool helps researchers, AI developers, and data scientists use their Apple Silicon Macs for advanced work with LLMs. You can fine-tune existing models on your own datasets using methods such as LoRA or DPO, or run them for chat and experimentation. Everything runs directly on your Mac, with no need for cloud services or specialized hardware.
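As a rough illustration of what LoRA fine-tuning does under the hood (this is generic LoRA math in plain Python, not SiLLM's actual API, and the function names here are made up for the sketch):

```python
# LoRA idea: instead of updating a full weight matrix W (d_out x d_in),
# train two small matrices B (d_out x r) and A (r x d_in) with rank
# r << d, and use W_eff = W + (alpha / r) * (B @ A).

def matmul(X, Y):
    """Multiply two matrices given as nested lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), leaving W itself untouched."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Tiny example: 2x2 base weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # 2 x r
A = [[0.5, 0.5]]     # r x 2
W_eff = lora_effective_weight(W, A, B, alpha=1.0, r=1)
```

Because only A and B are trained, the number of trainable parameters drops from d_out * d_in to r * (d_out + d_in), which is what makes fine-tuning feasible on a single Mac's memory.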

284 stars. No commits in the last 6 months.

Use this if you want to experiment with, fine-tune, or run Large Language Models locally on your Apple Silicon device, taking advantage of MLX's hardware-optimized performance.

Not ideal if you need to train extremely large models from scratch or require distributed training across multiple machines.

Tags: AI Development, Machine Learning Research, Natural Language Processing, LLM Fine-tuning, On-device AI
Stale (6 months), No Package, No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 284
Forks: 26
Language: Python
License: MIT
Last pushed: Jun 16, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/armbues/SiLLM"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
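The endpoint above presumably returns JSON. The field names in the payload below are assumptions for illustration only, not documented API output; a minimal sketch of parsing such a response and checking that the four sub-scores sum to the composite score:

```python
import json

# Hypothetical response shape -- field names are assumed, not taken from
# the API's documentation.
sample = '''{
  "repo": "armbues/SiLLM",
  "score": 43,
  "breakdown": {"maintenance": 2, "adoption": 10, "maturity": 16, "community": 15}
}'''

data = json.loads(sample)
total = sum(data["breakdown"].values())  # 2 + 10 + 16 + 15 == 43
```

In a real client you would fetch the body with your HTTP library of choice instead of the inline `sample` string.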