wudingjian/rkllm_chat

Deploy LLM models to the Rockchip RK3588 chip and run inference on the development board's NPU.

Quality score: 35 / 100 (Emerging)

This project helps embedded-systems developers and hobbyists deploy large language models (LLMs) such as Qwen or TinyLlama directly onto Rockchip RK3588 development boards. It takes a pre-trained LLM as input and produces an optimized, executable version that runs efficiently on the board's Neural Processing Unit (NPU). This enables local, on-device AI chat without needing cloud services.
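
For a rough sense of what that conversion step looks like, here is a minimal sketch using the RKLLM-Toolkit Python API from Rockchip's rknn-llm project, which this repository appears to build on. The method names follow the toolkit's published examples, but the exact arguments, the quantization dtype ('w8a8'), and the model path ('Qwen/Qwen-1_8B-Chat') are assumptions that may differ across toolkit versions; verify against the version you have installed.

    # Sketch: convert a Hugging Face model into a .rkllm artifact for the RK3588 NPU.
    # Assumes Rockchip's RKLLM-Toolkit is installed (x86 Linux host, per its docs).
    from rkllm.api import RKLLM

    llm = RKLLM()

    # Load a pre-trained Hugging Face model (path/name is illustrative).
    ret = llm.load_huggingface(model='Qwen/Qwen-1_8B-Chat')
    if ret != 0:
        raise SystemExit('model load failed')

    # Quantize and optimize for the RK3588 target; dtype is an assumed example.
    ret = llm.build(do_quantization=True,
                    quantized_dtype='w8a8',
                    target_platform='rk3588')
    if ret != 0:
        raise SystemExit('build failed')

    # Export the .rkllm file, which is then copied to the board for NPU inference.
    ret = llm.export_rkllm('./qwen-1.8b-chat.rkllm')
    if ret != 0:
        raise SystemExit('export failed')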

No commits in the last 6 months.

Use this if you are a hardware enthusiast or developer who wants to run popular LLMs offline, directly on a Rockchip RK3588 development board, for local AI applications.

Not ideal if you need a cloud-based LLM solution or a general-purpose LLM development kit for varied hardware, or if you lack experience with embedded Linux and Docker.

Tags: embedded-AI, edge-computing, local-LLM-deployment, Rockchip-development, NPU-acceleration

Status: No License · Stale (6m) · No Package · No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 8 / 25
Community: 18 / 25

Stars: 72
Forks: 14
Language: Python
License: none
Last pushed: Oct 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wudingjian/rkllm_chat"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
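
If you prefer Python to curl, the sketch below fetches the same data using only the standard library. The endpoint URL is the one shown above; that the response body is JSON is an assumption.

    import json
    import urllib.request

    # Same endpoint as the curl command above.
    URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wudingjian/rkllm_chat"

    # Fetch and pretty-print the quality data (assumed to be a JSON document).
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))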