joennlae/halutmatmul

Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator

Quality score: 37 / 100 (Emerging)

This project speeds up deep neural network inference by replacing expensive matrix multiplications with hashed lookup-table operations, making them faster and far more energy-efficient. It takes standard neural network models as input and produces an optimized version that runs on specialized hardware (the Stella Nera accelerator), retaining high accuracy at significantly lower power consumption. It is aimed at AI hardware designers and researchers deploying efficient machine learning models.
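To make the idea concrete, here is a minimal, illustrative sketch of lookup-table-based approximate matrix multiplication in the spirit of this project. It is not the project's actual algorithm: halutmatmul learns fast hash-based encoders, while this sketch encodes each row by its nearest prototype (chosen from a random sample of rows), which is slower but shows the core structure of trading multiplies for table lookups.

```python
import numpy as np

def lut_matmul(A, B, C=4, K=16, rng=None):
    """Approximate A @ B with per-subspace prototype lookup tables.

    Illustrative only: split A's columns into C subspaces, pick K prototypes
    per subspace, precompute each prototype's partial product with B, then
    replace multiplication with a table gather per encoded row.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N, D = A.shape
    d = D // C  # subspace width (assumes C divides D)
    out = np.zeros((N, B.shape[1]))
    for c in range(C):
        sub = slice(c * d, (c + 1) * d)
        # "learn" K prototypes for this subspace (random sample of rows;
        # the real project uses trained hash functions instead)
        protos = A[rng.choice(N, size=K, replace=K > N), sub]
        # encode: index of the nearest prototype for each row
        codes = np.argmin(
            ((A[:, sub, None] - protos.T[None]) ** 2).sum(axis=1), axis=1
        )
        # precompute lookup table of prototype x B partial products
        table = protos @ B[sub]            # shape (K, M)
        out += table[codes]                # gather instead of multiply
    return out
```

At inference time only the encoding and the table gathers remain, which is what a hardware accelerator can implement very cheaply; accuracy depends on how well the prototypes cover the input distribution.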

216 stars. No commits in the last 6 months.

Use this if you are designing custom hardware accelerators for AI and need to dramatically improve the energy and area efficiency of deep learning inference.

Not ideal if you are looking for a software-only solution for general-purpose CPUs or GPUs without specialized hardware integration.

Tags: AI hardware design, energy-efficient AI, DNN acceleration, custom silicon, machine learning inference
Badges: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 216
Forks: 14
Language: Python
License: MIT
Last pushed: Dec 10, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/joennlae/halutmatmul"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.