Gabriele-bot/ALVEO-PYNQ_ML
Neural network inference on Alveo cards with the hls4ml framework
This project helps machine learning engineers and researchers accelerate neural network inference. It takes a pre-trained Keras or QKeras model and, via hls4ml, converts it into a highly optimized implementation that runs on Xilinx Alveo FPGA boards. The output is a faster, more efficient inference engine targeting specialized hardware.
No commits in the last 6 months.
Use this if you need to deploy neural networks with very low latency and high throughput on dedicated FPGA hardware, where small-batch inference can significantly outperform CPUs and GPUs.
Not ideal if you are working with standard CPU/GPU environments or if you are not familiar with FPGA development workflows and Xilinx tools.
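To give a sense of the hls4ml flow this project builds on, a conversion is typically driven by a small configuration file. The fragment below is an illustrative sketch, not taken from this repository: the file paths, project name, Alveo part number, and precision values are all assumptions, and the exact keys follow hls4ml's documented YAML-driven command-line flow, which may differ between versions.

```yaml
# Hypothetical hls4ml conversion config (not from this repo).
# Paths, part number, and precision are illustrative assumptions.
KerasJson: model/my_model.json      # architecture of the pre-trained Keras model
KerasH5: model/my_model_weights.h5  # trained weights
OutputDir: my-hls-project
ProjectName: myproject
XilinxPart: xcu250-figd2104-2L-e    # an Alveo U250 part, as an example
Backend: Vivado
HLSConfig:
  Model:
    Precision: ap_fixed<16,6>       # fixed-point type for weights/activations
    ReuseFactor: 1                  # 1 = fully parallel; raise to trade speed for area
```

With hls4ml installed, a config like this is typically consumed by `hls4ml convert -c config.yml` followed by `hls4ml build`; consult the hls4ml documentation for the exact flow your version supports.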
Stars: 8
Forks: —
Language: Ada
License: —
Category: —
Last pushed: Jul 28, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Gabriele-bot/ALVEO-PYNQ_ML"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
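The listing does not document the shape of the JSON that the quality endpoint returns. As a minimal sketch of how a response might be consumed, the snippet below parses a hypothetical payload with Python's standard library; every field name in it is an assumption, not the API's actual schema.

```python
import json

# Hypothetical response payload -- the real schema of the pt-edge API is
# not documented in this listing; the field names here are assumptions.
sample = '{"repo": "Gabriele-bot/ALVEO-PYNQ_ML", "stars": 8, "commits_30d": 0}'

data = json.loads(sample)
print(data["repo"])         # repository identifier
print(data["stars"])        # star count as an integer
```

A real client would fetch the URL shown above (e.g. with `urllib.request` or `requests`) and then parse the body the same way.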
Higher-rated alternatives
fastmachinelearning/hls4ml
Machine learning on FPGAs using HLS
alibaba/TinyNeuralNetwork
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
KULeuven-MICAS/zigzag
HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators
fastmachinelearning/hls4ml-tutorial
Tutorial notebooks for hls4ml
doonny/PipeCNN
An OpenCL-based FPGA Accelerator for Convolutional Neural Networks