SiLLM and SiLLM-examples
SiLLM-examples complements SiLLM with practical demonstrations and code examples that show how to use SiLLM's functionality for training and running LLMs on Apple Silicon.
About SiLLM
armbues/SiLLM
SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.
It lets researchers, AI developers, and data scientists use an Apple Silicon Mac for advanced work with LLMs: take existing models or datasets and fine-tune them with methods such as LoRA or DPO, or use them for chat and experimentation. Everything runs and trains directly on the Mac, with no need for cloud services or specialized hardware.
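To make the LoRA mention concrete, here is a minimal, framework-free sketch of the idea behind it. This is an illustration of the technique, not SiLLM's actual API: the pretrained weight matrix W stays frozen, and only a low-rank update B @ A (scaled by alpha / r) is trained, which is why fine-tuning fits on consumer hardware.

```python
# Hypothetical illustration of the LoRA idea (not SiLLM's API): the
# effective weight is W + (alpha / r) * B @ A, with only A and B trained.

def matmul(X, Y):
    # Plain-Python matrix multiply, adequate for this tiny sketch.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add_scaled(X, Y, s):
    # Element-wise X + s * Y.
    return [[x + s * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, r, alpha = 4, 1, 2.0
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weight
A = [[0.1, 0.2, 0.3, 0.4]]                  # trainable r x d down-projection
B = [[0.0] for _ in range(d)]               # trainable d x r up-projection, initialized to zero

# With B initialized to zero, the adapter starts as a no-op and the
# model's behavior is unchanged at the start of training:
W_eff = add_scaled(W, matmul(B, A), alpha / r)
assert W_eff == W

# Trainable parameters: 2 * d * r for the adapter vs. d * d for full
# fine-tuning (the gap grows dramatically at real model sizes).
print(2 * d * r, "vs", d * d)  # prints "8 vs 16"
```

At realistic dimensions (d in the thousands, r of 8 or 16) the adapter holds well under 1% of the parameters of the layer it augments, which is what makes on-device fine-tuning practical.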
About SiLLM-examples
armbues/SiLLM-examples
Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon
This collection provides practical examples for fine-tuning and evaluating LLMs specifically on Apple Silicon hardware. It helps developers and researchers experiment with training methods such as LoRA and DPO and benchmark model performance. You can use various datasets and pre-trained models; the output includes trained models or evaluation metrics such as perplexity and MMLU scores.
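Of the metrics mentioned above, perplexity is simple enough to sketch directly. This is a generic illustration of the definition, not SiLLM's evaluation code: perplexity is the exponential of the average negative log-likelihood the model assigns to each true token in a held-out text, so lower is better and a perfect model would score 1.

```python
import math

# Hypothetical per-token probabilities the model assigned to the true
# tokens of some held-out text (made-up numbers for illustration).
token_probs = [0.25, 0.5, 0.125, 0.5]

# Perplexity = exp(mean negative log-likelihood per token).
nll = [-math.log(p) for p in token_probs]
perplexity = math.exp(sum(nll) / len(nll))
print(round(perplexity, 3))  # prints 3.364
```

Equivalently, perplexity is the geometric mean of the inverse probabilities; here the product of the probabilities is 2**-7 over 4 tokens, giving 2**1.75 ≈ 3.364.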