riccardomusmeci/mlx-llm

Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX.

Score: 39 / 100 (Emerging)

This project helps developers build and run Large Language Model (LLM) applications directly on Apple Silicon Macs. It lets you load a range of pre-trained LLMs, fine-tune them with quantization, and extract text embeddings. The primary users are machine learning engineers and data scientists working with LLMs on Apple hardware.

459 stars. No commits in the last 6 months.

Use this if you are a developer looking to experiment with, integrate, or deploy LLMs efficiently on Apple Silicon, leveraging its unified memory architecture for real-time performance.

Not ideal if you need a high-level, no-code solution for interacting with LLMs or if you are working exclusively with cloud-based GPU infrastructure.

machine-learning-engineering natural-language-processing on-device-AI model-deployment AI-application-development
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25
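The overall 39/100 appears to be the unweighted sum of the four category scores listed above; this is an assumption about the scoring model, but the arithmetic checks out:

```python
# Category scores as shown on this page. Treating the overall score as
# their plain sum (no weighting) is an assumption, verified by arithmetic.
scores = {"Maintenance": 0, "Adoption": 10, "Maturity": 16, "Community": 13}
total = sum(scores.values())
print(total)  # 39, matching the overall score shown above
```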

Stars: 459
Forks: 28
Language: Python
License: —
Last pushed: Jan 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/riccardomusmeci/mlx-llm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
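The same endpoint can be queried from Python. This is a minimal sketch: the `X-Api-Key` header name and the JSON field names (`scores` keyed by category) are assumptions about the API, not documented behavior, so the demo below runs against a sample payload mirroring the numbers on this page rather than a live call:

```python
import json
from urllib.request import Request, urlopen

# Endpoint shown in the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/riccardomusmeci/mlx-llm"

def fetch_quality(url=API_URL, api_key=None):
    """Fetch the quality report as JSON. The X-Api-Key header name
    is an assumption; the keyless tier needs no header at all."""
    headers = {"X-Api-Key": api_key} if api_key else {}
    with urlopen(Request(url, headers=headers)) as resp:
        return json.load(resp)

def overall_score(report):
    # Assumed schema: a "scores" object mapping category names to integers.
    return sum(report["scores"].values())

# Offline demo with a sample payload matching the figures on this page:
sample = {"scores": {"maintenance": 0, "adoption": 10, "maturity": 16, "community": 13}}
print(overall_score(sample))  # 39
```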