srgtuszy/llama-cpp-swift
Swift bindings for the llama.cpp library
This project lets Swift developers integrate large language models (LLMs) directly into their macOS or Linux applications. It loads compatible LLM models and runs text-prompt inference entirely on the user's device, generating text responses locally. It is aimed at Swift application developers who want to add local AI capabilities without relying on cloud services.
No commits in the last 6 months.
Use this if you are a Swift developer building applications for macOS or Linux and need to run a large language model locally on the user's device.
Not ideal if you are not a Swift developer or if you need to integrate LLMs into a web service or a different programming environment.
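To try a Swift package like this, you would typically pull it in via Swift Package Manager. A minimal manifest sketch follows; the repository URL is derived from the repo name above, while the branch spec and the product/target names are assumptions and may differ from the package's actual manifest:

```swift
// swift-tools-version:5.9
// Package.swift — hypothetical SwiftPM manifest sketch.
// The dependency URL follows the repo name above; the branch ("main")
// and the product name ("LlamaCppSwift") are assumptions.
import PackageDescription

let package = Package(
    name: "MyLocalLLMApp",
    platforms: [.macOS(.v14)],
    dependencies: [
        // Pin to a release tag in real use instead of tracking a branch.
        .package(url: "https://github.com/srgtuszy/llama-cpp-swift", branch: "main")
    ],
    targets: [
        .executableTarget(
            name: "MyLocalLLMApp",
            dependencies: [
                .product(name: "LlamaCppSwift", package: "llama-cpp-swift")
            ]
        )
    ]
)
```

After resolving the dependency, the app can load a local model file and run inference on-device; consult the repository's README for the actual module and API names.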
Stars: 67
Forks: 24
Language: Swift
License: MIT
Category:
Last pushed: Dec 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/srgtuszy/llama-cpp-swift"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
beehive-lab/GPULlama3.java
GPU-accelerated Llama3.java inference in pure Java using TornadoVM.
gitkaz/mlx_gguf_server
This is a FastAPI based LLM server. Load multiple LLM models (MLX or llama.cpp) simultaneously...
JackZeng0208/llama.cpp-android-tutorial
llama.cpp tutorial on Android phone
awinml/llama-cpp-python-bindings
Run fast LLM inference using llama.cpp in Python
RhinoDevel/mt_llm
Pure C wrapper library that makes using llama.cpp on Linux and Windows as simple as possible.