eastriverlee/LLM.swift
LLM.swift is a simple, readable library for interacting with large language models locally on macOS, iOS, watchOS, tvOS, and visionOS.
It lets Apple developers run LLMs directly on their users' devices: take a local model file or a model from Hugging Face, integrate it into an app, and feed it user input to generate AI responses. It is aimed at app developers who want to add offline AI capabilities to their Apple applications.
Use this if you are an Apple developer looking to embed large language models directly into your macOS, iOS, or other Apple OS apps, allowing them to run AI inference locally.
Not ideal if you are not an Apple developer or if you want to integrate with cloud-based LLM APIs rather than running models locally on device.
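The flow described above (load a model, send user input, await a locally generated response) can be sketched roughly as follows. This is a hypothetical usage sketch, not verified against the library: the initializer, the `template` parameter, and the `respond(to:)` method are assumptions about LLM.swift's API, and the model filename is illustrative.

```swift
import LLM

// Assumed API: load a quantized GGUF model bundled with the app.
// The template is assumed to tell the library which chat format the model expects.
let url = Bundle.main.url(forResource: "model", withExtension: "gguf")!
let bot = LLM(from: url, template: .chatML("You are a helpful assistant."))

// Feed user input and await the response, generated entirely on device.
Task {
    let answer = await bot.respond(to: "Summarize what LLM.swift does.")
    print(answer)
}
```

Because inference runs locally, no network access or API key is involved; the main costs are app size (the bundled model weights) and on-device compute.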
Stars: 829
Forks: 111
Language: C++
License: MIT
Category:
Last pushed: Dec 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/eastriverlee/LLM.swift"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Related tools
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
RunanywhereAI/runanywhere-sdks
Production ready toolkit to run AI locally
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)