jainamnahar14surat/Deep-Learning-Accelerator-Transformer-FPGA

FPGA-based hardware accelerator for Transformer neural networks enabling efficient deep learning inference on edge devices.

Score: 30 / 100 (Emerging)

This project helps hardware engineers and deep learning specialists accelerate Transformer neural network inference. It takes a Transformer model architecture and maps it directly onto an FPGA, yielding highly optimized, parallelized hardware for real-time deep learning applications. Typical users are designing or implementing AI systems that require low-latency, energy-efficient processing at the edge.

Use this if you need to deploy deep learning Transformer models on edge devices with strict real-time, low-latency, or energy-efficiency requirements.

Not ideal if you are a software developer looking for a high-level deep learning framework or library.

edge-ai hardware-acceleration real-time-inference fpga-development embedded-systems
No Package · No Dependents

Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 5
Forks:
Language: Verilog
License: MIT
Last pushed: Feb 07, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jainamnahar14surat/Deep-Learning-Accelerator-Transformer-FPGA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
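For scripted access, the curl call above can be reproduced with Python's standard library. This is a minimal sketch assuming only what the example shows: the endpoint path structure and that it returns JSON. The response schema is not documented here, so the code simply decodes and prints whatever JSON comes back; the `category` and `repo` parameter names are illustrative, not part of any documented API.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-endpoint URL, following the path shown in the curl example."""
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """Fetch the quality report and decode it as JSON (schema assumed, not documented)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    report = fetch_quality(
        "ml-frameworks",
        "jainamnahar14surat/Deep-Learning-Accelerator-Transformer-FPGA",
    )
    print(json.dumps(report, indent=2))
```

At the free tier (100 requests/day without a key), a script like this should cache responses rather than re-fetching on every run.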