foolwood/pytorch-slimming
Learning Efficient Convolutional Networks through Network Slimming (ICCV 2017).
This tool helps machine learning engineers and researchers make their deep learning models smaller and faster. It takes a pre-trained convolutional neural network and shrinks it by identifying and pruning less important channels, following the network slimming approach of the paper above. The output is a more efficient model that maintains high accuracy and is suitable for deployment in resource-constrained environments.
577 stars. No commits in the last 6 months.
Use this if you need to deploy a convolutional neural network on devices with limited memory or processing power, such as mobile phones or embedded systems, without significantly sacrificing accuracy.
Not ideal if your primary goal is to improve model accuracy rather than reduce model size, or if you are working with non-convolutional network architectures.
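At its core, network slimming trains with an L1 sparsity penalty on the batch-norm scaling factors (gamma) and then prunes channels whose |gamma| falls below a global threshold. A minimal sketch of that channel-selection step, assuming plain Python lists of per-layer |gamma| values (the function name `select_channels` and the `prune_ratio` parameter are illustrative, not this repo's API):

```python
def select_channels(bn_scales, prune_ratio):
    """Return a per-layer keep-mask given batch-norm |gamma| values.

    bn_scales   -- list of lists: absolute gamma values, one inner list
                   per batch-norm layer
    prune_ratio -- fraction of channels, network-wide, to remove
    """
    # Pool all scaling factors to pick one global threshold,
    # as in the network-slimming paper.
    all_scales = sorted(g for layer in bn_scales for g in layer)
    cut = int(len(all_scales) * prune_ratio)
    threshold = all_scales[cut] if cut < len(all_scales) else float("inf")
    # Keep a channel only if its scaling factor clears the threshold.
    return [[g >= threshold for g in layer] for layer in bn_scales]
```

The actual repository operates on PyTorch `nn.BatchNorm2d` weights and also rebuilds the pruned network, but the thresholding logic it relies on is essentially the above.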
Stars: 577
Forks: 97
Language: Python
License: MIT
Category:
Last pushed: May 13, 2019
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/foolwood/pytorch-slimming"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...