QiaozheZhang/Global-One-shot-Pruning

The official implementation of the paper "How Sparse Can We Prune A Deep Network: A Fundamental Limit Viewpoint".

Score: 32 / 100 (Emerging)

This project helps machine learning engineers and researchers compress large neural networks with minimal loss in performance. Given a trained deep neural network and its training data, it produces a significantly smaller, pruned version of the network that is more efficient to deploy or to study further.
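
For orientation, the general shape of global one-shot pruning can be sketched with PyTorch's built-in pruning utilities. This is a minimal magnitude-based illustration, not the repository's own method (which derives the achievable sparsity from the paper's fundamental-limit analysis); the toy model and the 90% sparsity level below are arbitrary.

import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a real network such as AlexNet or ResNet.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Collect every prunable (module, parameter) pair across the whole network,
# so the sparsity budget is allocated globally rather than layer by layer.
params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]

# One-shot global magnitude pruning: zero out the 90% of weights with the
# smallest absolute value, measured across all layers at once.
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.9)

# Bake the masks into the weights so the pruned model can be saved as-is.
for module, name in params:
    prune.remove(module, name)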

No commits in the last 6 months.

Use this if you need to reduce the computational complexity and memory footprint of large deep learning models like AlexNet or ResNet without retraining from scratch.

Not ideal if you are working with smaller, custom network architectures or if you prefer to build sparse models from the ground up.

deep-learning-optimization neural-network-pruning model-compression machine-learning-research ai-efficiency
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 9 / 25
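
Together the four subscores account for the overall figure: 0 + 7 + 16 + 9 = 32, matching the 32 / 100 shown above.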


Stars: 29
Forks: 3
Language: Python
License: GPL-2.0
Last pushed: Nov 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/QiaozheZhang/Global-One-shot-Pruning"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
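
The same request can be made from Python. This is a minimal sketch using the requests library, assuming the endpoint returns the report as JSON; no particular response fields are assumed, so the body is printed verbatim.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/QiaozheZhang/Global-One-shot-Pruning")

# Fetch the quality report; raise on HTTP errors instead of failing silently.
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Assumes a JSON body; the schema is not documented here, so print it as-is.
print(resp.json())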