IMPETUS-UdeS/rule4ml
Resource Utilization and Latency Estimation for ML on FPGA.
This tool helps hardware design engineers and embedded AI developers estimate the FPGA resources (like BRAM, DSP, FF, LUT) and inference latency required for their machine learning models. You provide a machine learning model (e.g., a Keras model), and it outputs a detailed table showing expected resource usage and prediction speeds across different FPGA boards and configurations. This allows you to quickly compare design choices without needing to perform time-consuming hardware synthesis.
Use this if you need to quickly evaluate different machine learning model architectures or FPGA hardware settings and predict their performance and resource footprint before committing to lengthy synthesis and deployment.
Not ideal if you need precise, post-synthesis validation of your FPGA design, as this tool provides pre-synthesis estimations.
Stars: 18
Forks: 1
Language: Python
License: GPL-3.0
Category:
Last pushed: Feb 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IMPETUS-UdeS/rule4ml"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
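The same request can be made from Python using only the standard library. This is a minimal sketch mirroring the curl command above; the response's JSON field names are not documented in this listing, so the payload is decoded generically rather than parsed into specific keys.

```python
import json
import urllib.request

# Base endpoint, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality payload for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL the curl command targets.
    print(quality_url("IMPETUS-UdeS", "rule4ml"))
```

With an API key (for the 1,000/day tier), the key would presumably be sent as a header or query parameter; the listing does not specify which, so that part is left out.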
Higher-rated alternatives
fastmachinelearning/hls4ml
Machine learning on FPGAs using HLS
alibaba/TinyNeuralNetwork
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
KULeuven-MICAS/zigzag
HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators
fastmachinelearning/hls4ml-tutorial
Tutorial notebooks for hls4ml
doonny/PipeCNN
An OpenCL-based FPGA Accelerator for Convolutional Neural Networks