VeriSilicon/TIM-VX
VeriSilicon Tensor Interface Module
TIM-VX helps embedded systems engineers and AI solution integrators deploy neural networks efficiently on VeriSilicon machine learning accelerators. It takes models built with popular frameworks such as TensorFlow Lite or ONNX and optimizes them for VeriSilicon hardware, enabling faster, more power-efficient inference on embedded devices. It is aimed at developers building AI-powered features for edge devices.
Use this if you are developing AI applications for embedded systems and need to optimize neural network models to run on VeriSilicon NPU hardware.
Not ideal if you are developing general-purpose AI models for cloud platforms or if your target hardware does not use VeriSilicon ML accelerators.
Stars: 250
Forks: 87
Language: C
License: —
Category: ml-frameworks
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VeriSilicon/TIM-VX"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
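For scripted access, the endpoint above follows a predictable `category/owner/repo` pattern. The sketch below builds the request URL from those parts; the `quality_url` helper is hypothetical (not part of any published client), and the response schema is not documented here, so fetching and parsing the JSON is left to the caller (e.g. with `urllib.request` or `requests`).

```python
# Minimal sketch: construct the quality-API URL for a given repository.
# API_BASE is taken from the curl example above; quality_url is a
# hypothetical helper, not an official client function.

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for one repository in a given category."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

print(quality_url("ml-frameworks", "VeriSilicon", "TIM-VX"))
```

Anonymous callers are limited to 100 requests/day, so a batch script should throttle requests or use a free key for the 1,000/day tier.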
Related frameworks
apache/tvm
Open Machine Learning Compiler Framework
uxlfoundation/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
Tencent/ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
OpenMined/TenSEAL
A library for doing homomorphic encryption operations on tensors
iree-org/iree-turbine
IREE's PyTorch Frontend, based on Torch Dynamo.