omlx and asiai
The LLM inference server (oMLX) complements the multi-engine LLM benchmark and monitoring CLI (asiai): the server provides continuous batching and SSD caching for local inference, and the CLI can then benchmark and monitor that inference workload.
About omlx
jundot/omlx
LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar
oMLX helps individual developers and power users on Apple Silicon Macs efficiently run and manage large language models (LLMs) and vision-language models (VLMs) directly on their machines. It takes a model file and provides a local API endpoint and a web dashboard, allowing you to interact with models for tasks like code generation, content creation, or image analysis. This is for developers or technical users who want to run powerful AI models locally without relying on cloud services.
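To illustrate the local-endpoint workflow described above, here is a minimal Python sketch of sending a chat request to a locally hosted server. The port, path, and OpenAI-style request/response schema are assumptions for illustration only, not documented oMLX behavior; the actual endpoint exposed on your machine may differ.

```python
# Minimal sketch of querying a locally hosted LLM server.
# The base URL, port, and OpenAI-style /v1/chat/completions schema are
# assumptions for illustration; check the server's dashboard or docs
# for the endpoint it actually exposes.
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed host and port

payload = {
    "model": "local-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "max_tokens": 256,
}

req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    # OpenAI-compatible servers typically return the generated text here:
    print(body["choices"][0]["message"]["content"])
```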
About asiai
druide67/asiai
Multi-engine LLM benchmark & monitoring CLI for Apple Silicon