emapco/rk-transformers

Export and Run Hugging Face Transformers Models on Rockchip NPUs

Overall score: 41 / 100 (Emerging)

This tool helps developers export and run Hugging Face transformer models on Rockchip-powered edge devices with hardware-accelerated performance. It converts a pre-trained model into an optimized format that the NPU on chips such as the RK3588 or RK3576 can execute, enabling fast on-device inference. It is aimed at software and machine-learning engineers who need to embed AI capabilities in low-power embedded systems.

Available on PyPI.

Use this if you are developing AI applications for embedded systems and need to accelerate the performance of transformer models on Rockchip NPUs.

Not ideal if you are working with large-scale cloud-based AI deployments or do not use Rockchip hardware for your edge devices.

Tags: embedded-AI, edge-computing, natural-language-processing, machine-learning-deployment, hardware-acceleration
Maintenance: 6 / 25
Adoption: 6 / 25
Maturity: 22 / 25
Community: 7 / 25

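The overall 41 / 100 appears to be the sum of the four 25-point category scores listed above. A quick sanity check in Python (the exact weighting formula is not documented here, so treat the simple sum as an observation, not a spec):

```python
# Category scores from the listing, each out of 25.
scores = {"Maintenance": 6, "Adoption": 6, "Maturity": 22, "Community": 7}

overall = sum(scores.values())
maximum = len(scores) * 25
print(f"{overall} / {maximum}")  # → 41 / 100
```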

Stars: 24
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Dec 02, 2025
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/emapco/rk-transformers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
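From the documented example, the endpoint path looks like `/api/v1/quality/<registry>/<owner>/<repo>`; a minimal Python sketch using only the standard library, assuming that path layout holds (the response schema is undocumented here, so the example only lists top-level keys):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL.

    Path layout is inferred from the documented curl example,
    not from an official API spec.
    """
    return f"{BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    # Free tier: 100 requests/day, no API key required.
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)


# Example usage (performs a network request):
#   data = fetch_quality("transformers", "emapco", "rk-transformers")
#   print(sorted(data.keys()))
```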