hls4ml and rule4ml

hls4ml compiles machine learning models into FPGA hardware designs via high-level synthesis (HLS), while rule4ml estimates the resource usage and latency of such deployments before synthesis is run. The two are complements, addressing consecutive stages of the FPGA ML design flow.

                 hls4ml             rule4ml
Overall score    68 (Established)   37 (Emerging)
Maintenance      17/25              10/25
Adoption         10/25              6/25
Maturity         16/25              16/25
Community        25/25              5/25
Stars            1,849              18
Forks            530                1
Downloads        n/a                n/a
Commits (30d)    9                  0
Language         Python             Python
License          Apache-2.0         GPL-3.0
Package          none               none
Dependents       none               none

About hls4ml

fastmachinelearning/hls4ml

Machine learning on FPGAs using HLS

This project helps domain experts in fields like high-energy physics, quantum computing, or aerospace who need to process real-time data with extremely low latency. It takes machine learning models built with common frameworks and converts them into specialized firmware for FPGAs. The output is a highly optimized hardware implementation of your model, enabling rapid decision-making directly on hardware.

Tags: real-time control systems, high-energy physics, biomedical signal processing, quantum computing, satellite operations
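The conversion flow described above can be sketched with hls4ml's Python API. This is a minimal sketch assuming hls4ml and TensorFlow/Keras are installed; the small network and the FPGA part number are illustrative choices, not recommendations.

```python
# Minimal hls4ml conversion sketch. Guarded so it is a no-op when the
# (heavy) dependencies are not installed in the current environment.
try:
    import hls4ml
    from tensorflow import keras
except ImportError:  # hls4ml / TensorFlow not available here
    hls4ml = keras = None

hls_model = None
if hls4ml is not None:
    # A small dense network, typical of low-latency trigger-style models.
    model = keras.Sequential([
        keras.layers.Input(shape=(16,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(5, activation="softmax"),
    ])

    # Derive a per-model HLS configuration (precision, reuse factor, ...).
    config = hls4ml.utils.config_from_keras_model(model, granularity="model")

    # Convert the Keras model into an HLS project for a target FPGA part
    # (the part string below is an illustrative example).
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        output_dir="hls4ml_prj",
        part="xcu250-figd2104-2L-e",
    )

    # compile() builds a bit-accurate C++ emulation for quick validation;
    # hls_model.build() would run full HLS synthesis to produce firmware.
    hls_model.compile()
```

After validating numerics with the compiled emulation, the same project can be synthesized to firmware, which is the time-consuming step that rule4ml aims to let you defer.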

About rule4ml

IMPETUS-UdeS/rule4ml

Resource Utilization and Latency Estimation for ML on FPGA.

This tool helps hardware design engineers and embedded AI developers estimate the FPGA resources (like BRAM, DSP, FF, LUT) and inference latency required for their machine learning models. You provide a machine learning model (e.g., a Keras model), and it outputs a detailed table showing expected resource usage and prediction speeds across different FPGA boards and configurations. This allows you to quickly compare design choices without needing to perform time-consuming hardware synthesis.

Tags: FPGA development, embedded AI, hardware acceleration, machine learning deployment, resource estimation
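The workflow described above amounts to "model in, estimation table out". The sketch below illustrates that flow, but note that the `MultiModelEstimator` class, its import path, and the `predict` call are assumptions rather than a verified API; consult the rule4ml repository README for the actual interface.

```python
# Hypothetical rule4ml sketch: the estimator class name, import path, and
# predict() signature below are ASSUMPTIONS, not a verified API. Guarded so
# it is a no-op when the dependencies are not installed.
try:
    from tensorflow import keras
    from rule4ml.models.estimators import MultiModelEstimator  # assumed path
except ImportError:
    keras = MultiModelEstimator = None

report = None
if MultiModelEstimator is not None:
    # The same kind of small Keras model you would later hand to hls4ml.
    model = keras.Sequential([
        keras.layers.Input(shape=(16,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(5, activation="softmax"),
    ])

    estimator = MultiModelEstimator()
    # Expected result: a table of BRAM/DSP/FF/LUT usage and inference
    # latency across FPGA boards and configurations, produced without
    # running hardware synthesis.
    report = estimator.predict(model)
    print(report)
```

The value proposition is the turnaround time: an estimate in seconds versus hours of HLS synthesis per design point when sweeping configurations.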

Scores updated daily from GitHub, PyPI, and npm data.