Aloereed/llama.cpp-server-ohos
Llama.cpp server for OpenHarmony
This project helps OpenHarmony developers integrate AI inference capabilities directly into their applications on HarmonyOS devices. It loads LLaMA-family models and provides a local, high-performance inference engine, so applications can run AI tasks efficiently on-device. It is aimed at developers building AI-powered features for HarmonyOS phones, tablets, or IoT devices.
No commits in the last 6 months.
Use this if you are an OpenHarmony developer who needs to run AI models on-device for your applications, leveraging the native hardware and distributed capabilities of HarmonyOS.
Not ideal if you are looking for a cloud-based AI inference solution or do not develop for the OpenHarmony ecosystem.
Stars: 9
Forks: 4
Language: C++
License: MIT
Category:
Last pushed: Jan 15, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Aloereed/llama.cpp-server-ohos"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
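The same endpoint can be called from code. A minimal Python sketch, using only the URL shown above; the function names and the response fields are assumptions, not a documented client:

```python
import json
import urllib.request

# Base path taken verbatim from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API endpoint for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the JSON quality record (no key needed up to 100 requests/day).

    The structure of the returned JSON is not documented here, so callers
    should inspect it rather than rely on specific field names.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("Aloereed", "llama.cpp-server-ohos")` requests the same URL as the curl command above.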
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.