powerserve-project/PowerServe

A high-speed, easy-to-use LLM serving framework for local deployment

Score: 44 / 100 (Emerging)

This project helps developers integrate large language models (LLMs) into their mobile applications, running the models directly on user devices rather than relying on cloud services. It takes pre-trained models (such as those from Hugging Face) and optimizes them for high-speed local inference on Android and HarmonyOS devices, especially those with Qualcomm NPUs. The target user is a mobile app developer who wants to embed powerful AI capabilities directly into an application, with fast, offline access to LLMs.

146 stars. No commits in the last 6 months.

Use this if you are a mobile app developer looking to deploy LLMs locally on Android or HarmonyOS devices for fast, on-device AI inference.

Not ideal if you need to run LLMs on cloud servers, desktop computers, or devices without Qualcomm NPUs.

mobile-app-development on-device-AI LLM-deployment edge-computing Android-development
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 146
Forks: 20
Language: C++
License: Apache-2.0
Last pushed: Aug 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/powerserve-project/PowerServe"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
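For scripted access, the endpoint URL from the curl example above can be built programmatically. This is a minimal sketch: the `quality_url` helper is a name of our own choosing, and the response schema is not documented here, so the actual fetch is shown but left commented out.

```python
import urllib.request
import json

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    Mirrors the path shape of the curl example:
    /api/v1/quality/<ecosystem>/<owner>/<repo>
    """
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "powerserve-project", "PowerServe")
print(url)

# To actually fetch the score (response fields are an assumption,
# inspect the JSON before relying on any key):
# data = json.load(urllib.request.urlopen(url))
```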