ryuz/BinaryBrain

Binary Neural Network Framework for FPGA (Differentiable LUT)

Score: 52 / 100 (Established)

This project helps embedded systems engineers and researchers deploy highly efficient deep learning models directly onto FPGAs. It takes a description of your neural network and outputs Verilog code, which can then be used to configure an FPGA for tasks like real-time image recognition. The core innovation is directly training the FPGA's Look-up Table (LUT) elements for extremely low-latency, low-resource inference.
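To illustrate the core idea, here is a minimal sketch of a differentiable LUT in plain NumPy. This is not BinaryBrain's actual API; it only shows the underlying technique: each of the 2^k table entries of a k-input LUT gets a trainable logit, the entry-selection probabilities are made differentiable, and after training the table is hardened to the 0/1 values that configure the FPGA LUT. The example trains a single 2-input LUT to compute XOR.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Truth-table inputs of a 2-input LUT and the XOR target.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# One trainable logit per LUT table entry (2^2 = 4 entries).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=4)

def lut_forward(x, w):
    # Soft selection probability of each table entry given inputs in [0, 1].
    x0, x1 = x
    probs = np.array([(1 - x0) * (1 - x1),
                      (1 - x0) * x1,
                      x0 * (1 - x1),
                      x0 * x1])
    return probs, probs @ sigmoid(w)

lr = 5.0
for _ in range(200):
    for x, t in zip(X, y):
        probs, out = lut_forward(x, w)
        # Gradient of the squared error w.r.t. the table logits.
        s = sigmoid(w)
        w -= lr * 2.0 * (out - t) * probs * s * (1 - s)

# Harden: round each entry to 0/1 to get the final LUT contents.
table = (sigmoid(w) > 0.5).astype(int)
print(table.tolist())  # the learned XOR truth table: [0, 1, 1, 0]
```

In BinaryBrain itself the same principle is applied to whole networks of multi-input LUTs, and the hardened tables are emitted as Verilog rather than printed.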

172 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to run deep learning inference with minimal hardware resources and ultra-low latency on an FPGA, especially for real-time applications like embedded vision systems.

Not ideal if you are looking to train and deploy complex, high-precision deep learning models on general-purpose GPUs or CPUs without specific FPGA hardware constraints.

FPGA development · embedded AI · hardware acceleration · real-time inference · deep learning deployment
Stale (6 months) · No dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 15 / 25


Stars: 172
Forks: 22
Language: C++
License: MIT
Last pushed: Aug 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ryuz/BinaryBrain"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.