mlx-vlm and mlx-flash

The `mlx-flash` project provides performance optimizations that could potentially be integrated into or used by `mlx-vlm` to accelerate its Vision Language Model inference and fine-tuning on Apple Silicon, making the two projects complementary.

|               | mlx-vlm        | mlx-flash                |
|---------------|----------------|--------------------------|
| Score         | 81 (Verified)  | 28 (Experimental)        |
| Maintenance   | 20/25          | 13/25                    |
| Adoption      | 15/25          | 6/25                     |
| Maturity      | 25/25          | 9/25                     |
| Community     | 21/25          | 0/25                     |
| Stars         | 2,287          | 18                       |
| Forks         | 293            |                          |
| Downloads     |                |                          |
| Commits (30d) | 44             | 0                        |
| Language      | Python         | Python                   |
| License       | MIT            | MIT                      |
| Flags         | No risk flags  | No package, no dependents |

About mlx-vlm

Blaizzy/mlx-vlm

MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.

This project helps you understand images, audio, and video content by describing or answering questions about them. You provide a visual, audio, or multi-modal input and a question or prompt, and the tool generates a textual response. It's designed for anyone working with multimedia content on a Mac who needs to extract information or generate descriptions.

multimedia-analysis content-understanding image-description audio-analysis document-processing
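As a rough sketch of the workflow described above (install the package, then point it at an image with a prompt), a command-line invocation might look like the following. The model name, file path, and flag values here are illustrative assumptions, not guaranteed defaults; check the project's README for the exact interface of your installed version.

```shell
# Install mlx-vlm from PyPI (requires an Apple Silicon Mac)
pip install mlx-vlm

# Ask a question about a local image.
# Model name and flags below are assumptions for illustration:
#   --model  : an MLX-converted VLM from the mlx-community hub
#   --image  : path to the visual input
#   --prompt : the question or instruction about that input
python -m mlx_vlm.generate \
  --model mlx-community/Qwen2-VL-2B-Instruct-4bit \
  --max-tokens 100 \
  --prompt "Describe this image." \
  --image photo.jpg
```

The tool then prints a textual response describing or answering the question about the input, per the project description above.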

About mlx-flash

matt-k-wong/mlx-flash

Lightning-fast MLX utilities and optimizations for Apple Silicon

This project enables you to run very large AI models, like those with tens or hundreds of billions of parameters, directly on your Apple Mac, even if it has limited memory. It takes an existing large language model and efficiently streams its components from your Mac's fast storage, allowing you to get immediate text generation or analysis without needing to shrink or alter the model. This is ideal for AI practitioners, researchers, or developers who want to experiment with or deploy large models locally on their Apple Silicon machines.

large-language-models on-device-ai ai-model-deployment apple-silicon-ml ml-research

Scores updated daily from GitHub, PyPI, and npm data.