IMPETUS-UdeS/rule4ml

Resource Utilization and Latency Estimation for ML on FPGA.

Score: 37 / 100 (Emerging)

This tool helps hardware design engineers and embedded AI developers estimate the FPGA resources (such as BRAM, DSP, FF, and LUT) and inference latency required for their machine learning models. You provide a machine learning model (e.g., a Keras model), and it outputs a detailed table of estimated resource usage and inference latency across different FPGA boards and configurations. This lets you quickly compare design choices without running time-consuming hardware synthesis.

Use this if you need to quickly evaluate different machine learning model architectures or FPGA hardware settings and predict their performance and resource footprint before committing to lengthy synthesis and deployment.

Not ideal if you need precise, post-synthesis validation of your FPGA design, as this tool provides pre-synthesis estimations.

Tags: FPGA development, embedded AI, hardware acceleration, machine learning deployment, resource estimation
No package published · No dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 5 / 25


Stars: 18
Forks: 1
Language: Python
License: GPL-3.0
Last pushed: Feb 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IMPETUS-UdeS/rule4ml"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
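The same endpoint can be called from Python with only the standard library. The `quality_url` and `fetch_quality` helper names below are our own, and the JSON schema of the response is not documented on this page, so the fetch is left generic:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (helper name is ours)."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode its JSON body (schema undocumented here)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "IMPETUS-UdeS", "rule4ml"))
```

This reproduces the curl example's URL; respect the daily request limits noted above when polling.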