mlx-vlm and SiLLM

MLX-VLM extends MLX's capabilities to multimodal vision-language tasks, while SiLLM provides a higher-level abstraction layer for training and deploying text-only LLMs on the same MLX foundation. The two are complementary tools serving different model types within the Apple Silicon ecosystem.

| | mlx-vlm | SiLLM |
|---|---|---|
| Overall score | 81 (Verified) | 43 (Emerging) |
| Maintenance | 20/25 | 2/25 |
| Adoption | 15/25 | 10/25 |
| Maturity | 25/25 | 16/25 |
| Community | 21/25 | 15/25 |
| Stars | 2,287 | 284 |
| Forks | 293 | 26 |
| Downloads | | |
| Commits (30d) | 44 | 0 |
| Language | Python | Python |
| License | MIT | MIT |
| Risk flags | None | Stale 6m, No Package, No Dependents |

About mlx-vlm

Blaizzy/mlx-vlm

MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.

This project helps you understand image, audio, and video content by describing it or answering questions about it. You provide an image, audio clip, or video along with a question or prompt, and the tool generates a textual response. It is designed for anyone working with multimedia content on a Mac who needs to extract information or generate descriptions.

multimedia-analysis content-understanding image-description audio-analysis document-processing
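The image-description workflow above can be sketched in a few lines. This is a minimal sketch based on mlx-vlm's documented `load`/`generate` API; the model name, image path, and exact argument names are illustrative assumptions (the API has changed across versions), and running it requires an Apple Silicon Mac plus the model weights downloaded from Hugging Face.

```python
# Hedged sketch of mlx-vlm inference; model name and image path are
# placeholders, and this requires Apple Silicon hardware to actually run.
from mlx_vlm import load, generate

# load() returns the model and its processor (tokenizer + image preprocessor).
model, processor = load("mlx-community/Qwen2-VL-2B-Instruct-4bit")

# Ask a question about a local image; generate() returns the text response.
output = generate(
    model,
    processor,
    "Describe this image.",
    image=["photo.jpg"],  # hypothetical local file
)
print(output)
```

The same workflow is also exposed as a command-line entry point in the package, which is often the quickest way to try a model without writing any code.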

About SiLLM

armbues/SiLLM

SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.

This tool helps researchers, AI developers, and data scientists use their Apple Silicon Mac for advanced work with Large Language Models (LLMs). You can fine-tune existing models on your own datasets using methods such as LoRA or DPO, or run them for chat and experimentation. It simplifies running and fine-tuning these models directly on your Mac, rather than requiring cloud services or specialized hardware.

AI Development Machine Learning Research Natural Language Processing LLM Fine-tuning On-device AI
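As a rough illustration of the run-and-chat side of that workflow, the sketch below follows the shape of SiLLM's README examples. The model path is a placeholder and the exact method names are assumptions taken from the project's documentation at the time of writing; running it requires an Apple Silicon Mac and locally downloaded model weights.

```python
# Hedged sketch of SiLLM text generation; the model path is a placeholder
# and the interface is assumed from the project README, not verified here.
import sillm

# Load a local model checkpoint (e.g., one previously fine-tuned with LoRA).
model = sillm.load("/path/to/model")

# Stream generated text for a prompt.
for chunk in model.generate("Explain LoRA fine-tuning in one sentence."):
    print(chunk, end="", flush=True)
```

For training, SiLLM layers LoRA and DPO workflows on top of the same loaded model, so the appeal is keeping the whole fine-tune-then-chat loop on one machine.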

Scores updated daily from GitHub, PyPI, and npm data.