jainamnahar14surat/Deep-Learning-Accelerator-Transformer-FPGA
FPGA-based hardware accelerator for Transformer neural networks enabling efficient deep learning inference on edge devices.
This project helps hardware engineers and deep learning specialists accelerate Transformer neural network inference. It maps a Transformer model architecture directly onto an FPGA, producing highly optimized, parallelized hardware for real-time deep learning applications. Typical users are designing or implementing AI systems that need low-latency, energy-efficient processing at the edge.
Use this if you need to deploy deep learning Transformer models on edge devices with strict real-time, low-latency, or energy-efficiency requirements.
Not ideal if you are a software developer looking for a high-level deep learning framework or library.
Stars: 5
Forks: —
Language: Verilog
License: MIT
Category:
Last pushed: Feb 07, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jainamnahar14surat/Deep-Learning-Accelerator-Transformer-FPGA"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
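The same endpoint can be called from a script instead of curl. A minimal Python sketch, assuming the endpoint returns JSON (the response schema is not documented on this page, so the code only pretty-prints whatever comes back):

```python
# Sketch of calling the pt-edge quality endpoint, mirroring the curl
# command above. Assumption: the response is JSON; its fields are not
# documented here, so we print the raw record rather than pick keys.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode one quality record (free tier, no API key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    record = fetch_quality(
        "jainamnahar14surat",
        "Deep-Learning-Accelerator-Transformer-FPGA",
    )
    print(json.dumps(record, indent=2))
```

Swapping the owner/repo arguments fetches the record for any other listed project, such as the alternatives below.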
Higher-rated alternatives
fastmachinelearning/hls4ml
Machine learning on FPGAs using HLS
alibaba/TinyNeuralNetwork
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
KULeuven-MICAS/zigzag
HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators
fastmachinelearning/hls4ml-tutorial
Tutorial notebooks for hls4ml
doonny/PipeCNN
An OpenCL-based FPGA Accelerator for Convolutional Neural Networks