apache/tvm-vta

Open, Modular, Deep Learning Accelerator

Score: 49 / 100 (Emerging)

This project helps embedded systems engineers and researchers efficiently deploy deep learning models onto specialized hardware. It takes existing neural network models from common frameworks and generates optimized code, which can then be run on FPGAs or simulated on a workstation. The end user is typically a hardware designer, embedded AI engineer, or academic researcher working on custom AI accelerators.

333 stars. No commits in the last 6 months.

Use this if you need to optimize and deploy deep learning models onto custom FPGA hardware, or prototype new hardware-software co-designs for AI acceleration.

Not ideal if you are looking to train deep learning models or deploy them onto standard CPUs/GPUs without custom hardware acceleration needs.

Tags: deep-learning-acceleration, FPGA-development, embedded-AI, hardware-software-co-design, AI-chip-design

Flags: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25


Stars: 333
Forks: 89
Language: Scala
License: Apache-2.0
Last pushed: Apr 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/apache/tvm-vta"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
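The same endpoint can be queried from a script instead of curl. A minimal Python sketch using only the standard library (the structure of the JSON response is an assumption; inspect the actual payload before relying on specific fields):

```python
import json
import urllib.request

# Base path of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    Unauthenticated access is rate-limited to 100 requests/day;
    the response schema here is assumed, not documented.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    report = fetch_quality("ml-frameworks", "apache", "tvm-vta")
    print(json.dumps(report, indent=2))
```

For the higher 1,000/day limit, a free API key would typically be passed as a header or query parameter; how this service expects it is not specified here, so check its documentation.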