kendryte/nncase

Open deep learning compiler stack for Kendryte AI accelerators ✨

Quality score: 61 / 100 (Established)

This tool helps embedded AI engineers and system designers deploy pre-trained neural network models onto Kendryte AI accelerator hardware. It takes models from common frameworks like TFLite, Caffe, or ONNX and converts them into an optimized format for faster, more efficient execution on Kendryte chips such as the K230. The result is a compiled model ready for integration into edge devices.
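As a sketch of the conversion flow described above, the snippet below drives nncase's Python API to compile a TFLite or ONNX model into a kmodel for the K230. The nncase call names (CompileOptions, Compiler, ImportOptions, gencode_tobytes) follow the project's published examples, but treat them as assumptions and check the docs for the version you have installed.

```python
# Sketch: compile a TFLite/ONNX model to a kmodel with nncase's Python API.
from pathlib import Path


def importer_for(model_path: str) -> str:
    """Pick the nncase import method name from the model file extension."""
    ext = Path(model_path).suffix.lower()
    return {".tflite": "import_tflite", ".onnx": "import_onnx"}[ext]


def compile_to_kmodel(model_path: str, kmodel_path: str, target: str = "k230") -> None:
    """Convert a framework model into a kmodel for the given Kendryte target."""
    import nncase  # requires the nncase wheel (pip install nncase)

    options = nncase.CompileOptions()
    options.target = target
    compiler = nncase.Compiler(options)

    # Feed the raw model bytes to the matching importer (TFLite or ONNX).
    with open(model_path, "rb") as f:
        getattr(compiler, importer_for(model_path))(f.read(), nncase.ImportOptions())

    # Run the compilation pipeline and write out the generated kmodel.
    compiler.compile()
    with open(kmodel_path, "wb") as f:
        f.write(compiler.gencode_tobytes())
```

Typical usage would be compile_to_kmodel("mobilenet.tflite", "mobilenet.kmodel", target="k230"), after which the kmodel is loaded by the runtime on the device.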


Use this if you need to convert your deep learning models to run efficiently on Kendryte AI accelerator hardware, ensuring high performance for embedded vision or AI applications.

Not ideal if you are working with AI accelerators from other manufacturers or if you need a general-purpose deep learning framework for training models.

embedded-ai edge-computing machine-vision model-deployment hardware-acceleration
Package: none
Dependents: none
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25


Stars: 864
Forks: 206
Language: C#
License: Apache-2.0
Last pushed: Mar 02, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kendryte/nncase"

Open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000/day.