Thraetaona/Innervator
Innervator: Hardware Acceleration for Neural Networks
Innervator helps hardware engineers and embedded-systems developers optimize the performance and power consumption of AI models on specialized hardware. It takes the design specification of a neural network, including its layers, neurons, weights, and biases, and generates a custom hardware design for Field-Programmable Gate Arrays (FPGAs). The result is significantly faster inference and lower energy use than running the same model on general-purpose processors.
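To make the idea concrete, here is a minimal software model of the computation a generated neuron circuit performs: a fixed-point multiply-accumulate over the inputs and weights, plus a bias, followed by an activation. This is a hypothetical sketch of the general technique (fixed-point quantization is standard practice on FPGAs); the Q-format width and the ReLU activation are assumptions, not Innervator's actual generated VHDL.

```python
# Illustrative fixed-point neuron model -- NOT Innervator's output.
# FRAC_BITS (the assumed Q-format fractional width) and the ReLU
# activation are assumptions for illustration only.

FRAC_BITS = 8          # fractional bits of the fixed-point format
SCALE = 1 << FRAC_BITS # 2**FRAC_BITS

def to_fixed(x: float) -> int:
    """Quantize a real value to a signed fixed-point integer."""
    return round(x * SCALE)

def neuron(inputs, weights, bias) -> float:
    """Fixed-point dot product plus bias, then a hard ReLU.

    Mirrors what a hardware MAC pipeline computes: each product of two
    Q-format values carries 2*FRAC_BITS fractional bits, so the bias is
    pre-scaled to match and the accumulator is shifted back at the end.
    """
    acc = to_fixed(bias) * SCALE  # align bias with the product scale
    for x, w in zip(inputs, weights):
        acc += to_fixed(x) * to_fixed(w)
    acc >>= FRAC_BITS             # rescale back to Q-format
    return max(acc, 0) / SCALE    # ReLU, converted to float for inspection

# Example: quantization makes the result differ slightly from the
# real-valued answer (0.1) -- exactly the trade-off FPGA designs accept.
print(neuron([0.5, -0.25], [1.0, 2.0], 0.1))  # → 0.1015625
```

The quantization error visible in the example (0.1015625 instead of 0.1) is the accuracy/area trade-off that tools in this space manage when mapping trained weights onto hardware datapaths.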
No commits in the last 6 months.
Use this if you need to deploy neural networks on resource-constrained devices or edge electronics, where minimizing power consumption and maximizing processing speed are critical.
Not ideal if you are a software developer focused solely on training and deploying AI models on standard CPUs or GPUs without custom hardware considerations.
Stars: 18
Forks: 3
Language: VHDL
License: GPL-3.0
Category:
Last pushed: Aug 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Thraetaona/Innervator"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
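For callers who prefer Python over curl, the same endpoint can be queried with the standard library alone. The endpoint path comes from the curl example above; the response schema is not documented on this page, so the sketch simply decodes and returns the raw JSON rather than assuming any particular fields.

```python
# Minimal client for the quality API shown above. Only the endpoint
# path is taken from this page; the response schema is unknown here,
# so the decoded JSON is returned as-is.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode its JSON body (network required).

    Works without a key within the 100 requests/day anonymous limit.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Usage (performs a live request, so it is left commented out):
# print(json.dumps(fetch_quality("Thraetaona", "Innervator"), indent=2))
```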
Higher-rated alternatives
fastmachinelearning/hls4ml
Machine learning on FPGAs using HLS
alibaba/TinyNeuralNetwork
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
KULeuven-MICAS/zigzag
HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators
fastmachinelearning/hls4ml-tutorial
Tutorial notebooks for hls4ml
doonny/PipeCNN
An OpenCL-based FPGA Accelerator for Convolutional Neural Networks