Intelligent-Microsystems-Lab/SNNQuantPrune

Code for the ISCAS23 paper "The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks"

Quality score: 28 / 100 (Experimental)

This project helps researchers and engineers working with Spiking Neural Networks (SNNs) understand how weight quantization and pruning affect hardware performance. It takes SNN model configurations, applies various compression strategies, and reports the resulting hardware efficiency. It is aimed at professionals in neuromorphic computing and AI hardware design.

No commits in the last 6 months.

Use this if you are designing or optimizing Spiking Neural Networks for hardware deployment and need to evaluate the trade-offs of model compression techniques.

Not ideal if you are solely focused on the software development or application of SNNs without considering their hardware implementation details.
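The two compression techniques the paper studies can be illustrated with a minimal sketch. This is not the repository's actual code: the function names, the 4-bit setting, and the 50% sparsity target are hypothetical choices for demonstration.

```python
import numpy as np

def prune_weights(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest `sparsity` fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_weights(w, bits=4):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 levels per sign for 4-bit signed
    scale = np.max(np.abs(w)) / qmax
    q = np.round(w / scale).astype(np.int32)
    return q * scale                      # dequantized values on the quantized grid

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_compressed = quantize_weights(prune_weights(w, sparsity=0.5), bits=4)
print(np.mean(w_compressed == 0))  # fraction of zeroed weights, close to 0.5
```

Pruning before quantization keeps the sparsity target exact; both reduce the memory footprint and arithmetic cost of the weight matrix, which is what drives the hardware impact the paper measures.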

Tags: neuromorphic-computing, AI-hardware-design, neural-network-optimization, edge-AI, SNN-deployment
Badges: No License · Stale (6 months) · No Package · No Dependents
Score breakdown:
- Maintenance: 0 / 25
- Adoption: 5 / 25
- Maturity: 8 / 25
- Community: 15 / 25


Repository stats:
- Stars: 11
- Forks: 4
- Language: Python
- License: none
- Last pushed: Apr 20, 2023
- Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Intelligent-Microsystems-Lab/SNNQuantPrune"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
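The same endpoint can be queried from Python using only the standard library. The response schema is not documented on this page, so this sketch simply pretty-prints whatever JSON comes back:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for one repository."""
    return f"{BASE}/{collection}/{owner}/{repo}"

url = quality_url("ml-frameworks", "Intelligent-Microsystems-Lab", "SNNQuantPrune")

try:
    with urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    print(json.dumps(data, indent=2))  # schema undocumented; print as-is
except URLError as exc:
    # No network (or the service is down): report and continue.
    print(f"request failed: {exc}")
```

Requests stay within the anonymous 100/day limit unless an API key is supplied.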