Gabriele-bot/ALVEO-PYNQ_ML

Neural network inference on Alveo cards with the hls4ml framework

Quality score: 12 / 100 (Experimental)

This project helps machine learning engineers and researchers accelerate neural network inference. It takes a pre-trained Keras or QKeras model and, using the hls4ml framework, converts it into a highly optimized implementation that runs on Xilinx Alveo FPGA boards. The output is a faster, more efficient inference engine for specialized hardware.
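To make the conversion step concrete, here is a minimal sketch of the kind of hls4ml flow such a project automates. The actual conversion requires hls4ml and a Xilinx Vivado/Vitis toolchain, so the library call below sits behind an import guard; the configuration dict mirrors the shape returned by `hls4ml.utils.config_from_keras_model`, but the precision and reuse-factor values are illustrative assumptions, not taken from this repository.

```python
# Illustrative hls4ml-style model configuration (values are assumptions):
# precision and reuse factor are the main knobs trading FPGA area for speed.
config = {
    "Model": {
        "Precision": "ap_fixed<16,6>",  # fixed-point type used on the FPGA
        "ReuseFactor": 1,               # 1 = fully parallel; higher = less area
        "Strategy": "Latency",          # optimize for low latency
    }
}

def convert(keras_model, hls_config, output_dir="hls_prj"):
    """Convert a (Q)Keras model to an HLS project via hls4ml.

    Returns None when hls4ml is not installed, so this sketch stays
    runnable without the FPGA toolchain.
    """
    try:
        import hls4ml
    except ImportError:
        return None
    return hls4ml.converters.convert_from_keras_model(
        keras_model, hls_config=hls_config, output_dir=output_dir
    )

print(config["Model"]["Precision"])
```

In a real flow, the object returned by `convert` would then be compiled for C simulation (`hls_model.compile()`) and finally synthesized into a bitstream for the Alveo card.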

No commits in the last 6 months.

Use this if you need to deploy neural networks with extremely low latency and high throughput on dedicated FPGA hardware, which can significantly outperform CPUs and GPUs for inference.

Not ideal if you are working with standard CPU/GPU environments or if you are not familiar with FPGA development workflows and Xilinx tools.

Tags: FPGA acceleration · low-latency inference · machine learning deployment · hardware acceleration · edge AI
Badges: No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 8
Forks:
Language: Ada
License:
Last pushed: Jul 28, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Gabriele-bot/ALVEO-PYNQ_ML"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.