trevorpogue/algebraic-nnhw
Algebraic enhancements for GEMM & AI accelerators
This project provides hardware designs for deep-learning accelerators, aimed at engineers building more efficient AI chips. It translates algebraic matrix-multiplication algorithms into specialized systolic-array architectures, producing hardware designs that perform deep-learning inference significantly faster, or with fewer physical resources, than conventional designs. It is intended for hardware architects and ASIC/FPGA engineers building AI accelerators.
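As a rough illustration of the kind of algebraic trick such designs exploit (a sketch of Winograd's 1968 fast inner-product algorithm, not necessarily the exact algorithm this repo implements in hardware): the dot product of two even-length vectors can be rearranged so the main loop needs only n/2 multiplications, with correction terms that depend on one vector each and can therefore be precomputed and reused, e.g. for fixed weights in a GEMM.

```python
def winograd_inner_product(x, y):
    """Compute dot(x, y) for even-length vectors using Winograd's 1968
    rearrangement: n/2 multiplications in the main sum, plus two
    correction terms that each depend on only one vector."""
    n = len(x)
    assert n == len(y) and n % 2 == 0, "vectors must be equal even length"
    half = n // 2
    # Each correction term depends on a single vector, so in a matrix
    # multiplication the weight-side term can be computed once and
    # amortized across every row it multiplies.
    cx = sum(x[2 * j] * x[2 * j + 1] for j in range(half))
    cy = sum(y[2 * j] * y[2 * j + 1] for j in range(half))
    # Main sum: only n/2 multiplications of cross-paired sums.
    main = sum((x[2 * j] + y[2 * j + 1]) * (x[2 * j + 1] + y[2 * j])
               for j in range(half))
    return main - cx - cy


# Example: matches the plain dot product 1*5 + 2*6 + 3*7 + 4*8 = 70
print(winograd_inner_product([1, 2, 3, 4], [5, 6, 7, 8]))  # → 70
```

Expanding one pair shows why it works: (x0 + y1)(x1 + y0) = x0*x1 + x0*y0 + x1*y1 + y0*y1, and subtracting the precomputable x0*x1 and y0*y1 terms leaves exactly x0*y0 + x1*y1. In hardware, trading multiplications for additions like this is attractive because multipliers dominate the area and power of a systolic array.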
290 stars. No commits in the last 6 months.
Use this if you are designing custom hardware for deep-learning inference and need to improve the performance, area, or power efficiency of your matrix-multiplication units beyond what conventional designs achieve.
Not ideal if you are a software developer looking for a library to speed up deep learning on existing general-purpose hardware like GPUs or CPUs.
Stars: 290
Forks: 18
Language: Python
License: —
Category: ml-frameworks
Last pushed: Feb 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/trevorpogue/algebraic-nnhw"
Open to everyone: 100 requests/day with no key needed, or 1,000/day with a free key.
Higher-rated alternatives
apache/tvm
Open Machine Learning Compiler Framework
uxlfoundation/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
Tencent/ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
OpenMined/TenSEAL
A library for doing homomorphic encryption operations on tensors
iree-org/iree-turbine
IREE's PyTorch Frontend, based on Torch Dynamo.