QiaozheZhang/Global-One-shot-Pruning
An official implementation of the paper "How Sparse Can We Prune A Deep Network: A Fundamental Limit Viewpoint".
This project helps machine learning engineers and researchers shrink large neural networks with little loss in performance. Given an existing deep neural network and its training data, it produces a significantly smaller, pruned version of the network, making the model more efficient to deploy or study further.
No commits in the last 6 months.
Use this if you need to reduce the computational complexity and memory footprint of large deep learning models like AlexNet or ResNet without retraining from scratch.
Not ideal if you are working with smaller, custom network architectures or if you prefer to build sparse models from the ground up.
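The repository's pruning criterion comes from the paper's fundamental-limit analysis; as a generic illustration of what "global one-shot pruning" means, the sketch below prunes by magnitude instead: it ranks all weights across every layer together and zeros the smallest fraction in a single pass (a simplified stand-in, not the paper's method).

```python
import numpy as np

def global_one_shot_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries across ALL layers at once.

    weights  : list of np.ndarray, one array per layer
    sparsity : fraction of parameters to remove globally, in [0, 1)

    Note: this is plain global magnitude pruning, used here only to
    illustrate the "global, one-shot" idea; ties at the threshold may
    prune slightly more than the requested fraction.
    """
    # Pool the magnitudes of every parameter from every layer.
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    k = int(sparsity * all_mags.size)
    if k == 0:
        return [w.copy() for w in weights]
    # k-th smallest magnitude becomes the global pruning threshold.
    threshold = np.partition(all_mags, k - 1)[k - 1]
    # One shot: apply the same threshold to every layer simultaneously.
    return [np.where(np.abs(w) <= threshold, 0.0, w) for w in weights]
```

Because the threshold is computed over the pooled weights rather than per layer, layers with uniformly small weights can end up much sparser than others, which is the key difference from layer-wise pruning.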
Stars: 29
Forks: 3
Language: Python
License: GPL-2.0
Category:
Last pushed: Nov 13, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/QiaozheZhang/Global-One-shot-Pruning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...