uncSoft/anubis-oss
Local LLM Testing & Benchmarking for Apple Silicon
Anubis helps developers, researchers, and anyone else experimenting with large language models (LLMs) understand how different models perform on their Apple Silicon Mac. It monitors local LLMs (such as those run with Ollama or LM Studio) and provides detailed, real-time metrics on inference speed, CPU/GPU usage, and power consumption. The output is comprehensive performance data, charts, and comparison reports, making it easy to see which LLM runs best on your specific hardware configuration.
Use this if you are developing with or evaluating local large language models on an Apple Silicon Mac and need to systematically benchmark their performance and resource usage.
Not ideal if you only need a chat interface for LLMs, or if you are not using an Apple Silicon Mac.
Stars
68
Forks
4
Language
Swift
License
GPL-3.0
Category
Last pushed
Mar 13, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/uncSoft/anubis-oss"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
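If you prefer calling the endpoint from code rather than curl, a minimal Python sketch is below. The URL pattern is taken from the curl example above; the response is assumed to be JSON, since its exact schema is not documented here.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode one repository's quality record.

    Assumes the endpoint returns a JSON object; the field names
    are not documented on this page.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the same URL as the curl example above.
    print(quality_url("llm-tools", "uncSoft", "anubis-oss"))
```

The free-tier rate limit above (100 requests/day) applies to these calls as well, so cache responses if you poll more than one repository.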
Higher-rated alternatives
jundot/omlx
LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the...
josStorer/RWKV-Runner
An RWKV management and startup tool: fully automated, only 8 MB. Provides an interface...
waybarrios/vllm-mlx
OpenAI and Anthropic compatible server for Apple Silicon. Run LLMs and vision-language models...
jordanhubbard/nanolang
A tiny experimental language designed to be targeted by coding LLMs
akivasolutions/tightwad
Pool your CUDA + ROCm GPUs into one OpenAI-compatible API. Speculative decoding proxy gives you...