john-rocky/EdgeLLM

Simple LLM package for iOS devices.

Score: 36 / 100 (Emerging)

This package helps iOS and macOS developers add AI chat capabilities directly to their apps. It takes user input, processes it with language models such as Qwen, Gemma, or Phi-3 running entirely on the device, and returns text responses. It suits anyone building Apple-platform applications that need offline, private, and fast AI text generation or conversation.

No commits in the last 6 months.

Use this if you are an iOS/macOS developer looking to integrate large language models (LLMs) directly into your applications, allowing them to run offline, prioritize user privacy, and deliver fast, AI-driven experiences without cloud dependencies.

Not ideal if you need to run AI models on server-side infrastructure, integrate with non-Apple platforms, or require access to extremely large, cutting-edge LLMs that cannot be efficiently run on edge devices.

Tags: iOS-app-development · macOS-app-development · on-device-AI · mobile-app-features · offline-AI
Status: Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 15 / 25
Community 12 / 25


Stars: 30
Forks: 4
Language: Swift
License: Apache-2.0
Last pushed: Jul 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/john-rocky/EdgeLLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
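The endpoint above follows a `{category}/{owner}/{repo}` path pattern. A minimal sketch of building such a URL programmatically, assuming the pattern generalizes to other repositories (the `quality_url` helper and its parameter names are hypothetical, not part of the API documentation):

```python
# Base path taken from the example curl command above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    Assumes (hypothetically) that category, owner, and repo are
    plain path segments, as in the documented example.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("llm-tools", "john-rocky", "EdgeLLM"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/john-rocky/EdgeLLM
```

The resulting URL can then be fetched with any HTTP client, exactly as the curl example shows.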