N1k1tung/infer-ring

Infer Ring is an iOS and macOS app that facilitates cross-device LLM inference using MLX

Score: 34 / 100 (Emerging)

Infer Ring helps you run large language models (LLMs) directly on your Apple devices, even when no single device has enough memory. It distributes the model you want to use across multiple iPhones, iPads, and Macs, letting you interact with larger models locally. It is aimed at researchers, developers, and hobbyists who want to experiment with large AI models without needing powerful cloud servers.
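To illustrate the idea of splitting a model across devices, here is a minimal, hypothetical sketch in Swift: it partitions a transformer's layers across a ring of devices roughly in proportion to each device's free memory. All names (`Device`, `partitionLayers`, the example devices) are illustrative assumptions, not Infer Ring's actual API.

```swift
// Hypothetical sketch: assign contiguous layer ranges to devices,
// sized roughly in proportion to each device's free memory.
// This is NOT Infer Ring's real API; names are made up for illustration.

struct Device {
    let name: String
    let freeMemoryGB: Double
}

/// Split `totalLayers` transformer layers into contiguous ranges,
/// one per device, proportional to that device's share of total memory.
func partitionLayers(totalLayers: Int, devices: [Device]) -> [(device: Device, layers: Range<Int>)] {
    let totalMemory = devices.reduce(0) { $0 + $1.freeMemoryGB }
    var assignments: [(device: Device, layers: Range<Int>)] = []
    var start = 0
    for (index, device) in devices.enumerated() {
        let isLast = index == devices.count - 1
        let share = isLast
            ? totalLayers - start  // last device takes whatever remains
            : Int((Double(totalLayers) * device.freeMemoryGB / totalMemory).rounded())
        assignments.append((device: device, layers: start..<(start + share)))
        start += share
    }
    return assignments
}

// Example ring: memory figures are invented for the sketch.
let ring = [
    Device(name: "MacBook Pro", freeMemoryGB: 24),
    Device(name: "iPad Pro", freeMemoryGB: 8),
    Device(name: "iPhone 15 Pro", freeMemoryGB: 4),
]

for (device, layers) in partitionLayers(totalLayers: 32, devices: ring) {
    print("\(device.name): layers \(layers.lowerBound)..<\(layers.upperBound)")
}
```

In a real system each device would then hold only its assigned layers in memory and forward activations to the next device in the ring during inference.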

Use this if you want to run powerful large language models locally on your Apple devices by combining their memory, rather than relying on expensive cloud services.

Not ideal if you need very fast token generation for real-time applications, as distributing inference across devices can be slower than running on a single, very powerful machine.

AI-model-deployment local-AI edge-AI LLM-experimentation distributed-computing
No Package No Dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 11 / 25
Community 8 / 25


Stars: 9
Forks: 1
Language: Swift
License: MIT
Last pushed: Feb 21, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/N1k1tung/infer-ring"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.