hls4ml and rule4ml
hls4ml compiles machine learning models into FPGA hardware designs via high-level synthesis (HLS), while rule4ml estimates the resource usage and latency of such deployments before synthesis. The two are complementary, addressing consecutive stages of the FPGA ML design flow: rule4ml for early design-space exploration, hls4ml for implementation.
About hls4ml
fastmachinelearning/hls4ml
Machine learning on FPGAs using HLS
This project helps domain experts in fields like high-energy physics, quantum computing, and aerospace who need to process data in real time with extremely low latency. It takes machine learning models built with common frameworks and converts them into specialized FPGA firmware via high-level synthesis. The output is a highly optimized hardware implementation of your model, enabling rapid decision-making directly in hardware.
About rule4ml
IMPETUS-UdeS/rule4ml
Resource Utilization and Latency Estimation for ML on FPGA.
This tool helps hardware design engineers and embedded AI developers estimate the FPGA resources (BRAM, DSP, FF, LUT) and inference latency of a machine learning model before synthesis. Given a model (e.g., a Keras model), it outputs a table of predicted resource utilization and latency across different FPGA boards and configurations, letting you compare design choices quickly without running time-consuming hardware synthesis.