powerserve-project/PowerServe
High-speed, easy-to-use LLM serving framework for local deployment
This project helps developers integrate large language models (LLMs) into their mobile applications, allowing the models to run directly on user devices rather than relying on cloud services. It takes pre-trained LLMs (such as those from Hugging Face) and optimizes them for high-speed local execution on Android and HarmonyOS devices, especially those with Qualcomm NPUs. The target user is a mobile app developer who wants to embed AI capabilities directly into an application, providing fast, offline access to LLMs.
146 stars. No commits in the last 6 months.
Use this if you are a mobile app developer looking to deploy LLMs locally on Android or HarmonyOS devices for fast, on-device AI inference.
Not ideal if you need to run LLMs on cloud servers, desktop computers, or devices without Qualcomm NPUs.
Stars
146
Forks
20
Language
C++
License
Apache-2.0
Category
Last pushed
Aug 07, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/powerserve-project/PowerServe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
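If you prefer to query the endpoint programmatically rather than with curl, a minimal Python sketch follows. The URL pattern is taken from the example request above; the assumption that the response body is JSON, and the `quality_url`/`fetch_quality` helper names, are illustrative, not part of the documented API.

```python
import json
import urllib.request

# Base path taken from the curl example shown above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report endpoint for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# The same endpoint the curl command above targets:
print(quality_url("powerserve-project", "PowerServe"))
```

The anonymous tier (100 requests/day) needs no credentials; how an API key is supplied for the 1,000/day tier is not documented here, so the sketch omits it.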
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...