SiLLM and SiLLM-examples

SiLLM-examples complements SiLLM with practical demonstrations and code examples for training and running LLMs on Apple Silicon.

| Metric | SiLLM | SiLLM-examples |
| --- | --- | --- |
| Score | 43 (Emerging) | 39 (Emerging) |
| Maintenance | 2/25 | 2/25 |
| Adoption | 10/25 | 6/25 |
| Maturity | 16/25 | 16/25 |
| Community | 15/25 | 15/25 |
| Stars | 284 | 16 |
| Forks | 26 | 4 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | MIT | MIT |
| Flags | Stale 6m · No Package · No Dependents | Stale 6m · No Package · No Dependents |

About SiLLM

armbues/SiLLM

SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.

This tool helps researchers, AI developers, and data scientists use their Apple Silicon Mac for advanced work with Large Language Models (LLMs). You can fine-tune existing LLMs on your own datasets using methods like LoRA or DPO, or use them for chat and experimentation. It simplifies running and fine-tuning these models directly on your Mac, without cloud services or specialized hardware.
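As a rough illustration of that workflow, the sketch below loads a local model and streams generated text. This is a hedged sketch, not the definitive API: it assumes the `sillm` package is available and that a model in an MLX-compatible format exists at a placeholder path; `sillm.load` and `model.generate` follow the usage pattern shown in the upstream README, and exact signatures may differ.

```python
# Hedged sketch: MODEL_PATH is a placeholder, and the sillm calls are
# assumptions based on the upstream README, not a verified API reference.
import importlib.util

MODEL_PATH = "/path/to/model"  # placeholder path, replace with a real model

if importlib.util.find_spec("sillm") is None:
    # Degrade gracefully when sillm is not installed.
    print("sillm is not installed; skipping the demo")
else:
    import sillm

    model = sillm.load(MODEL_PATH)
    # Stream generated text piece by piece as the model produces it.
    for text, _metadata in model.generate("On a quiet Sunday morning,"):
        print(text, end="", flush=True)
```

The guard keeps the sketch runnable even on machines without SiLLM installed; in real use you would drop it and point `MODEL_PATH` at a downloaded model.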

Tags: AI Development, Machine Learning Research, Natural Language Processing, LLM Fine-tuning, On-device AI

About SiLLM-examples

armbues/SiLLM-examples

Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon

This collection provides practical examples for fine-tuning and evaluating large language models (LLMs) specifically on Apple Silicon hardware. It helps developers and researchers experiment with different training methods like LoRA and DPO, and benchmark model performance. You can use various datasets and pre-trained models, and the output includes trained models or evaluation metrics like perplexity and MMLU scores.
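Of the evaluation metrics mentioned above, perplexity has a simple closed form: the exponential of the average negative log-likelihood per token. The snippet below computes it from a list of token log-probabilities; this illustrates the metric itself, not SiLLM-examples' specific evaluation code, and the sample log-probabilities are made up for the demonstration.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Hypothetical natural-log probabilities a model assigned to 4 tokens.
logps = [math.log(0.5), math.log(0.25), math.log(0.5), math.log(0.25)]
print(round(perplexity(logps), 4))  # → 2.8284 (i.e. 2 ** 1.5)
```

Lower is better: a perplexity of 2.83 means the model was, on average, about as uncertain as choosing uniformly among ~2.8 tokens at each step.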

Tags: LLM training, model fine-tuning, model evaluation, machine learning engineering, natural language processing

Scores updated daily from GitHub, PyPI, and npm data.