ryuz/BinaryBrain
Binary Neural Network Framework for FPGA (Differentiable LUT)
This project helps embedded systems engineers and researchers deploy highly efficient deep learning models directly onto FPGAs. It takes a description of your neural network and outputs Verilog code, which can then be used to configure an FPGA for tasks like real-time image recognition. The core innovation is training the FPGA's lookup table (LUT) elements directly, enabling extremely low-latency, low-resource inference.
172 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to run deep learning inference with minimal hardware resources and ultra-low latency on an FPGA, especially for real-time applications like embedded vision systems.
Not ideal if you are looking to train and deploy complex, high-precision deep learning models on general-purpose GPUs or CPUs without specific FPGA hardware constraints.
Stars: 172
Forks: 22
Language: C++
License: MIT
Category:
Last pushed: Aug 12, 2025
Commits (30d): 0
Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ryuz/BinaryBrain"
```
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
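For programmatic use, the curl call above can be wrapped in a small Python client. This is a minimal sketch using only the standard library; the response field names (`stars`, `forks`) are assumptions based on the stats shown on this page, not a documented schema.

```python
import json
from urllib.request import urlopen

# Base endpoint as shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"


def parse_quality(payload: str) -> dict:
    """Extract a few stats from a JSON response.

    The 'stars' and 'forks' keys are assumed, not documented.
    """
    data = json.loads(payload)
    return {"stars": data.get("stars"), "forks": data.get("forks")}


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse the quality record for one repository (requires network)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return parse_quality(resp.read().decode("utf-8"))


# Example (requires network, counts against the 100 requests/day limit):
# print(fetch_quality("ryuz", "BinaryBrain"))
```

Keeping the URL construction and JSON parsing in separate functions makes the parsing testable offline, without spending requests against the daily quota.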
Related frameworks
mlverse/torch
R Interface to Torch
modern-fortran/neural-fortran
A parallel framework for deep learning
Beliavsky/Fortran-code-on-GitHub
Directory of Fortran codes on GitHub, arranged by topic
Cambridge-ICCS/FTorch
A library for directly calling PyTorch ML models from Fortran.
NVIDIA/TorchFort
An Online Deep Learning Interface for HPC programs on NVIDIA GPUs