john-rocky/EdgeLLM
A simple LLM package for iOS devices.
This package helps iOS and macOS developers easily add powerful AI chat capabilities directly into their apps. It takes user input, processes it using various language models like Qwen, Gemma, or Phi-3 running entirely on the device, and outputs text responses. Anyone building mobile or desktop applications for Apple devices that need offline, private, and fast AI text generation or conversation can use this.
No commits in the last 6 months.
Use this if you are an iOS/macOS developer looking to integrate large language models (LLMs) directly into your applications, allowing them to run offline, prioritize user privacy, and deliver fast, AI-driven experiences without cloud dependencies.
Not ideal if you need to run AI models on server-side infrastructure, integrate with non-Apple platforms, or require access to extremely large, cutting-edge LLMs that cannot be efficiently run on edge devices.
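To illustrate the on-device integration described above, here is a minimal Swift sketch. EdgeLLM's actual API is not documented here, so every identifier below (the `EdgeLLM` type, the model identifier, the streaming `chat` method) is a hypothetical assumption for illustration — consult the repository's README for the real interface.

```swift
import EdgeLLM  // module name assumed from the package name; verify in the repo

// Hypothetical usage sketch — the real EdgeLLM API may differ.
// Assumed: an `EdgeLLM` type initialized with a model identifier, exposing
// an async token-streaming chat method. Inference runs entirely on-device,
// so no network round-trips are needed after the model is loaded.
@main
struct ChatDemo {
    static func main() async throws {
        // Load a small on-device model (identifier is an assumption).
        let llm = try await EdgeLLM(model: .qwen)

        // Stream generated tokens to stdout as they arrive.
        for try await token in llm.chat("Summarize on-device LLMs in one sentence.") {
            print(token, terminator: "")
        }
    }
}
```

The streaming loop mirrors Swift's standard `AsyncSequence` pattern, which is the idiomatic way to surface incremental token output in a SwiftUI or command-line app.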
Stars: 30
Forks: 4
Language: Swift
License: Apache-2.0
Category:
Last pushed: Jul 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/john-rocky/EdgeLLM"
Open to everyone — 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
Higher-rated alternatives
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
RunanywhereAI/runanywhere-sdks
Production-ready toolkit to run AI locally.
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)