apache/tvm-vta
Open, Modular, Deep Learning Accelerator
This project (VTA, the Versatile Tensor Accelerator) helps embedded systems engineers and researchers efficiently deploy deep learning models onto specialized hardware. It takes trained neural network models from common frameworks and generates optimized code through the TVM compiler stack, which can then run on FPGAs or be simulated on a workstation. The typical user is a hardware designer, embedded AI engineer, or academic researcher working on custom AI accelerators.
333 stars. No commits in the last 6 months.
Use this if you need to optimize and deploy deep learning models onto custom FPGA hardware, or prototype new hardware-software co-designs for AI acceleration.
Not ideal if you are looking to train deep learning models or deploy them onto standard CPUs/GPUs without custom hardware acceleration needs.
Stars
333
Forks
89
Language
Scala
License
Apache-2.0
Category
ml-frameworks
Last pushed
Apr 10, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/apache/tvm-vta"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
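The same endpoint can be called from a script instead of curl. A minimal Python sketch is below; the URL-building helper and the sample response fields (`stars`, `forks`, `license`) are assumptions for illustration, since the API's actual response schema is not documented here.

```python
import json
from urllib.parse import quote

# Base URL taken from the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository (hypothetical helper)."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

# Endpoint for apache/tvm-vta in the ml-frameworks category
url = quality_url("ml-frameworks", "apache", "tvm-vta")
print(url)

# Fetching would use urllib.request.urlopen(url); here we parse a
# hypothetical response body with assumed field names instead.
sample = json.loads('{"stars": 333, "forks": 89, "license": "Apache-2.0"}')
print(sample["stars"], sample["license"])
```

Swap in `urllib.request.urlopen(url).read()` (or the `requests` library) to perform the real call once you have confirmed the response schema.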
Higher-rated alternatives
apache/tvm
Open Machine Learning Compiler Framework
uxlfoundation/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
Tencent/ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
OpenMined/TenSEAL
A library for doing homomorphic encryption operations on tensors
iree-org/iree-turbine
IREE's PyTorch Frontend, based on Torch Dynamo.