jeshraghian/QSNNs
Quantization-aware training with spiking neural networks
This project helps machine learning researchers and engineers develop more efficient artificial neural networks. Specifically, it provides tools and methods for training spiking neural networks (SNNs) that are 'quantized', meaning they use less memory and computational power. Researchers supply an SNN model and training data, and the framework produces an optimized, hardware-friendly model.
No commits in the last 6 months.
Use this if you are working on developing efficient, low-power AI systems, especially for edge devices, and need to optimize spiking neural networks.
Not ideal if you are a beginner in machine learning or not working with spiking neural networks and quantization-aware training.
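The core idea behind quantization-aware training is to simulate low-precision weights during the forward pass while keeping full-precision weights for gradient updates, using a straight-through estimator for the non-differentiable rounding step. Below is a minimal generic sketch of that technique in plain Python; it is illustrative only and is not code from this repository (function names, bit widths, and the symmetric quantizer are assumptions).

```python
def fake_quantize(w, num_bits=8, w_max=1.0):
    """Uniform symmetric fake quantization: quantize then dequantize,
    so the rest of training still operates on floating-point values.
    Illustrative sketch, not QSNNs' actual quantizer."""
    levels = 2 ** (num_bits - 1) - 1          # e.g. 127 levels for 8 bits
    step = w_max / levels                     # quantization step size
    w_clipped = max(-w_max, min(w_max, w))    # clip to representable range
    return round(w_clipped / step) * step     # round to the nearest level

def ste_grad(w, upstream_grad, w_max=1.0):
    """Straight-through estimator: rounding has zero gradient almost
    everywhere, so pass the upstream gradient through unchanged inside
    the clipping range and zero it outside."""
    return upstream_grad if -w_max <= w <= w_max else 0.0
```

During training, `fake_quantize` is applied to the weights in the forward pass while the optimizer updates the underlying full-precision copies via `ste_grad`; at deployment only the quantized values are kept, which is what makes the resulting SNN hardware-friendly.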
Stars
53
Forks
7
Language
Python
License
—
Category
ml-frameworks
Last pushed
Feb 18, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jeshraghian/QSNNs"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...