jjang-ai/vmlx
vMLX - Cont Batch, Prefix, Paged, KV Cache Quant, VL - Powers MLX Studio. Image gen/edit, OpenAI/Anth
vMLX transforms your Apple Silicon Mac into a powerful local AI engine, letting you run large language models and image generation models right on your machine. You can input text prompts or images and receive generated text, code, or new images, all without sending data to the cloud. This is designed for researchers, developers, and power users who need to work with advanced AI models privately and efficiently on their macOS devices.
Available on PyPI.
Use this if you need to run cutting-edge AI models like LLMs, VLMs, or image generators directly on your Apple Silicon Mac, ensuring privacy and control over your data.
Not ideal if you don't have an Apple Silicon Mac, or if your projects require massive scale-out across many GPUs beyond a single machine.
Stars: 15
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 18, 2026
Commits (30d): 0
Dependencies: 19
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/jjang-ai/vmlx"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
topoteretes/cognee
Knowledge Engine for AI Agent Memory in 6 lines of code
CaviraOSS/OpenMemory
Local persistent memory store for LLM applications including Claude Desktop, GitHub Copilot,...
verygoodplugins/automem
AutoMem is a graph-vector memory service that gives AI assistants durable, relational memory:
CortexReach/memory-lancedb-pro
Enhanced LanceDB memory plugin for OpenClaw — Hybrid Retrieval (Vector + BM25), Cross-Encoder...
divagr18/memlayer
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory...