armbues/SiLLM-examples

Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon

Score: 39 / 100 (Emerging)

This collection provides practical examples for fine-tuning and evaluating large language models (LLMs) specifically on Apple Silicon hardware. It helps developers and researchers experiment with different training methods such as LoRA and DPO, and benchmark model performance. The examples work with a range of datasets and pre-trained models, and produce either trained model weights or evaluation metrics such as perplexity and MMLU scores.

No commits in the last 6 months.

Use this if you are an AI/ML developer or researcher working with large language models on Apple Silicon and need hands-on examples for training, fine-tuning, or benchmarking model performance.

Not ideal if you are an end-user looking for a ready-to-use application, or if you are not working with Apple Silicon hardware.

Tags: LLM training, model fine-tuning, model evaluation, machine learning engineering, natural language processing
Badges: Stale (6m), No Package, No Dependents
Maintenance 2 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 16
Forks: 4
Language: Python
License: MIT
Last pushed: May 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/armbues/SiLLM-examples"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
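For programmatic use, the curl command above can also be issued from Python. The sketch below is a minimal, hedged example: it only assumes the endpoint URL shown in the curl example; the response schema and any field names are not documented here, so the result is treated as an opaque JSON object.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above; the "transformers"
# registry segment is copied verbatim, not inferred.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch the quality report as parsed JSON.

    Per the note above, no API key is needed for up to 100
    requests/day. The shape of the returned dict is an assumption:
    the API's schema is not documented in this listing.
    """
    with urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example for this repository.
    print(quality_url("transformers", "armbues", "SiLLM-examples"))
```

Running the script prints the same URL used in the curl example; calling `fetch_quality` performs the actual request.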