VeriSilicon/tflite-vx-delegate

Tensorflow Lite external delegate based on TIM-VX

Quality score: 54 / 100 (Established)

This project helps embedded-systems engineers and firmware developers deploy TensorFlow Lite machine learning models onto VeriSilicon NPU (Neural Processing Unit) hardware. It takes an existing TensorFlow Lite model and offloads its execution to the NPU, yielding faster and more power-efficient AI inference on edge devices. It is intended for anyone integrating AI models into hardware built around a VeriSilicon NPU.

Use this if you need to accelerate TensorFlow Lite model inference on VeriSilicon NPU hardware.

Not ideal if your project does not involve VeriSilicon NPU hardware or TensorFlow Lite models.

embedded-AI edge-AI NPU-acceleration device-firmware machine-learning-deployment
No Package · No Dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 48
Forks: 24
Language: C++
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VeriSilicon/tflite-vx-delegate"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
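The endpoint above appears to follow a predictable pattern: a fixed base path, then a category, then the repository's owner and name. A minimal Python sketch of calling it without an API key; the `quality_url` and `fetch_quality` helper names are our own, and the response schema is an assumption (we only pretty-print whatever JSON comes back):

```python
import json
import urllib.request

# Base path taken from the curl example above; anonymous tier allows 100 requests/day.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository (helper name is ours)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report as JSON (schema not documented here)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Reconstruct the exact URL from the curl example; fetch_quality() would then
# retrieve it (network call not made here).
print(quality_url("ml-frameworks", "VeriSilicon", "tflite-vx-delegate"))
```

With a free key (1,000 requests/day), you would presumably pass it as a header or query parameter; the page does not say which, so that detail is left out of the sketch.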