VeriSilicon/TIM-VX

VeriSilicon Tensor Interface Module

Quality score: 59/100 (Established)

TIM-VX helps embedded systems engineers and AI solution integrators deploy neural networks efficiently onto VeriSilicon machine learning accelerators. It takes models built with popular frameworks like TensorFlow-Lite or ONNX and optimizes them for VeriSilicon hardware, resulting in faster and more efficient AI inference on embedded devices. This is ideal for developers creating AI-powered features for edge devices.


Use this if you are developing AI applications for embedded systems and need to optimize neural network models to run on VeriSilicon NPU hardware.

Not ideal if you are developing general-purpose AI models for cloud platforms or if your target hardware does not use VeriSilicon ML accelerators.

embedded-AI edge-AI-deployment neural-network-optimization machine-learning-inference hardware-acceleration
No package · No dependents
Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 23/25


Stars: 250
Forks: 87
Language: C
License: (not listed)
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VeriSilicon/TIM-VX"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
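The same endpoint can be called from a script. A minimal Python sketch, assuming only the URL pattern shown in the `curl` command above (the shape of the JSON response and its field names are not documented on this page, so the parsing step is left commented out):

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repo in a given collection."""
    return f"{BASE}/{collection}/{owner}/{repo}"

url = quality_url("ml-frameworks", "VeriSilicon", "TIM-VX")
print(url)

# Uncomment to fetch (counts against the 100 requests/day anonymous limit):
# with urllib.request.urlopen(url) as resp:
#     report = json.load(resp)  # response schema is an assumption, inspect it first
```

The fetch itself is left opt-in so the script can be run without consuming the daily request quota.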