VeriSilicon/tflite-vx-delegate
Tensorflow Lite external delegate based on TIM-VX
This project helps embedded systems engineers and firmware developers deploy TensorFlow Lite machine learning models onto VeriSilicon NPU (Neural Processing Unit) hardware. It acts as an external delegate that offloads supported operations from an existing TensorFlow Lite model to the NPU via TIM-VX, giving faster and more power-efficient AI inference on edge devices.
Use this if you need to accelerate TensorFlow Lite model inference on VeriSilicon NPU hardware.
Skip it if you are not targeting VeriSilicon NPUs or not using TensorFlow Lite models.
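As an external delegate, the library is typically loaded at runtime through TensorFlow Lite's standard delegate API. A minimal sketch, assuming the delegate shared library (here named libvx_delegate.so, an illustrative path) has been built from this repo and tflite_runtime is installed on the target device:

```python
# Hedged sketch: running a model through tflite-vx-delegate as a
# TensorFlow Lite external delegate. Library and model paths are
# illustrative assumptions, not fixed by the project.

def run_on_npu(model_path, delegate_path="libvx_delegate.so"):
    """Run one inference with the VX delegate handling supported ops."""
    # Lazy import: tflite_runtime is only present on the target device.
    import tflite_runtime.interpreter as tflite

    # Load the external delegate shared library built from this repo.
    delegate = tflite.load_delegate(delegate_path)

    # Ops the delegate cannot handle fall back to the CPU automatically.
    interpreter = tflite.Interpreter(
        model_path=model_path,
        experimental_delegates=[delegate],
    )
    interpreter.allocate_tensors()
    interpreter.invoke()

    # Return all output tensors.
    return [
        interpreter.get_tensor(d["index"])
        for d in interpreter.get_output_details()
    ]
```

On a host without the NPU or the delegate library, the same model can be run by simply omitting `experimental_delegates`, which makes CPU-vs-NPU comparisons straightforward.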
Stars
48
Forks
24
Language
C++
License
MIT
Category
ml-frameworks
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VeriSilicon/tflite-vx-delegate"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
Related frameworks
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX