Intelligent-Microsystems-Lab/SNNQuantPrune
Code for the ISCAS23 paper "The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks"
This project helps researchers and engineers working with Spiking Neural Networks (SNNs) understand how weight quantization and pruning affect hardware performance. It takes SNN model configurations, applies these compression techniques, and reports their impact on hardware efficiency. It is aimed at professionals in neuromorphic computing and AI hardware design.
No commits in the last 6 months.
Use this if you are designing or optimizing Spiking Neural Networks for hardware deployment and need to evaluate the trade-offs of model compression techniques.
Not ideal if you are solely focused on the software development or application of SNNs without considering their hardware implementation details.
Stars: 11
Forks: 4
Language: Python
License: —
Category: ml-frameworks
Last pushed: Apr 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Intelligent-Microsystems-Lab/SNNQuantPrune"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
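As a sketch, the same request can be made from Python using only the standard library. The helper below merely builds the endpoint URL shown in the curl example; the response format and its fields are assumptions, not documented here.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"


url = quality_url("ml-frameworks", "Intelligent-Microsystems-Lab", "SNNQuantPrune")
print(url)

# To actually fetch the data (requires network access; the JSON schema
# of the response is not documented on this page):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Requests above the free quota will presumably be rejected, so any client built on this should handle HTTP errors.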
Higher-rated alternatives
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...