zssloth/Embedded-Neural-Network
A collection of works aimed at reducing model size or at ASIC/FPGA accelerators for machine learning
This is a curated collection of research papers on making deep learning models smaller and more efficient, and on designing specialized hardware (ASIC/FPGA accelerators) to run them faster. It covers techniques such as model compression and optimized hardware architectures. Machine learning engineers, researchers, and hardware architects who develop or deploy deep neural networks will find it useful for staying current on methods that reduce model size and improve performance.
568 stars. No commits in the last 6 months.
Use this if you need to understand or implement techniques for making deep neural networks run more efficiently on resource-constrained devices or specialized hardware.
Not ideal if you are looking for ready-to-use code implementations or a general introduction to machine learning concepts.
Stars: 568
Forks: 120
Language: —
License: —
Category: —
Last pushed: Feb 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zssloth/Embedded-Neural-Network"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
fastmachinelearning/hls4ml: Machine learning on FPGAs using HLS
alibaba/TinyNeuralNetwork: An efficient and easy-to-use deep learning model compression framework
KULeuven-MICAS/zigzag: A hardware architecture-mapping design space exploration framework for deep learning accelerators
fastmachinelearning/hls4ml-tutorial: Tutorial notebooks for hls4ml
doonny/PipeCNN: An OpenCL-based FPGA accelerator for convolutional neural networks