Eric-mingjie/rethinking-network-pruning
Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019)
This project helps machine learning researchers and practitioners develop more efficient deep learning models. It provides implementations of several network pruning methods along with trained ImageNet models, and supplies code and results showing that training a pruned architecture from scratch can often match or exceed the accuracy of the conventional prune-then-fine-tune pipeline. The primary users are deep learning researchers focused on model compression and efficiency.
1,516 stars. No commits in the last 6 months.
Use this if you are exploring methods to make your deep neural networks smaller and faster without sacrificing accuracy, specifically in the context of network pruning.
Not ideal if you are looking for a plug-and-play model-compression library; this is research code, best used by those willing to study the implications of different pruning strategies.
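To make the topic concrete, here is a minimal sketch of one-shot magnitude (L1) pruning, the simplest form of the unstructured pruning this repository studies. This is a toy illustration of the general technique, not code from the repository.

```python
# Toy magnitude pruning: zero out the `sparsity` fraction of weights
# with the smallest absolute value. Real pruning operates on tensors
# (e.g. via torch.nn.utils.prune), but the idea is the same.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction zeroed."""
    if not 0.0 <= sparsity <= 1.0:
        raise ValueError("sparsity must be in [0, 1]")
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Magnitude threshold: the n_prune-th smallest |w|
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    pruned, removed = [], 0
    for w in weights:
        if abs(w) <= threshold and removed < n_prune:
            pruned.append(0.0)  # prune this connection
            removed += 1
        else:
            pruned.append(w)    # keep this connection
    return pruned

weights = [0.5, -0.1, 0.03, -0.8, 0.2, 0.01]
print(magnitude_prune(weights, 0.5))  # → [0.5, 0.0, 0.0, -0.8, 0.2, 0.0]
```

The paper's point of comparison: after pruning like this, you can either fine-tune the surviving weights or reinitialize and train the smaller architecture from scratch; the repository's experiments show the latter is often at least as accurate.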
Stars
1,516
Forks
291
Language
Python
License
MIT
Category
ml-frameworks
Last pushed
Jun 07, 2020
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Eric-mingjie/rethinking-network-pruning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
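The same data can be fetched from Python. A minimal sketch using only the standard library, assuming the endpoint from the curl command above returns JSON; the response field names (e.g. "stars") are assumptions, not documented schema.

```python
# Hedged sketch: query the quality API shown above with urllib.
# Only the base URL and path layout come from the page; the shape
# of the JSON response is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-repo endpoint URL from its path components."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, timeout=10):
    """Fetch and decode the JSON quality report for a repo."""
    url = quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "Eric-mingjie", "rethinking-network-pruning")
print(url)
# data = fetch_quality("ml-frameworks", "Eric-mingjie", "rethinking-network-pruning")
# print(data.get("stars"))  # assumed field name
```

The network call is left commented out so the snippet runs offline; uncomment it to hit the live endpoint (100 requests/day without a key).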
Related frameworks
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...