NotPunchnox/rkllama

An Ollama alternative for the Rockchip NPU: an efficient solution for running AI and deep-learning models on Rockchip devices with optimized NPU support (rkllm)

Score: 57 / 100 (Established)

RKLLama helps you run large language models (LLMs) and other AI models like image generators and speech-to-text on specialized Rockchip devices. It takes your input text, images, or audio and processes it using models optimized for your device's Neural Processing Unit (NPU), delivering quick AI-powered responses or content. This is for developers, tinkerers, or embedded system enthusiasts building AI applications on Rockchip RK3588(S) or RK3576 hardware.


Use this if you need to deploy and manage AI models on Rockchip-powered single-board computers, leveraging their NPU for faster and more efficient inference.

Not ideal if you're looking for a cloud-based AI solution or if you don't have specific Rockchip RK3588(S) or RK3576 hardware.

edge-ai embedded-systems ai-inference on-device-ai llm-deployment
No package · No dependents
Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 21/25


Stars: 447
Forks: 71
Language: Python
License: GPL-3.0
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/NotPunchnox/rkllama"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
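The same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above; the response fields are not documented here, so the returned JSON structure is an assumption:

```python
import json
import urllib.request

# Base path taken from the curl example above; the rest of the API surface
# (other categories, response schema) is not documented here.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    `category` (e.g. "llm-tools") and `repo` (e.g. "NotPunchnox/rkllama")
    follow the pattern visible in the curl example.
    """
    return f"{API_BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON.

    The shape of the response is an assumption; inspect the raw output
    before relying on specific keys.
    """
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


# Reconstructs the exact URL from the curl example:
url = quality_url("llm-tools", "NotPunchnox/rkllama")
```

Calling `fetch_quality("llm-tools", "NotPunchnox/rkllama")` performs the same request as the curl command; within the free tier no API key header is needed.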