megvii-research/Sparsebit

A model compression and acceleration toolbox based on PyTorch.

Quality score: 43 / 100 (Emerging)

This toolkit helps machine learning researchers and engineers make their large neural network models smaller and faster. You provide an existing PyTorch model, and the toolkit applies compression techniques like pruning and quantization. The output is a more compact and efficient model that performs similarly but requires less computational power and memory, making it easier to deploy on resource-constrained hardware.
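The two techniques named above can be sketched in a few lines of plain Python. These are generic textbook illustrations of magnitude pruning and uniform int8 quantization, not Sparsebit's actual API (which operates on PyTorch modules and tensors); the function names and the flat-list representation are simplifications for clarity:

```python
def magnitude_prune(weights, sparsity):
    """Unstructured pruning: zero out the smallest-magnitude fraction of weights."""
    ranked = sorted(abs(w) for w in weights)
    k = int(len(ranked) * sparsity)
    threshold = ranked[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_int8(x, scale):
    """Map a float to a signed 8-bit integer using a uniform scale factor."""
    return max(-128, min(127, round(x / scale)))

def dequantize_int8(q, scale):
    """Approximately recover the original float from its int8 code."""
    return q * scale

weights = [0.03, -1.2, 0.8, 0.002, -0.4, 2.1]

# Pruning: half the weights become exact zeros and can be skipped at inference.
pruned = magnitude_prune(weights, sparsity=0.5)

# Quantization: each float is stored in one byte; 'restored' is close to
# 'weights' but carries a small, bounded rounding error.
scale = max(abs(w) for w in weights) / 127
restored = [dequantize_int8(quantize_int8(w, scale), scale) for w in weights]
```

Real toolchains apply the same ideas per layer or per channel on tensors, and typically fine-tune afterwards to recover the small accuracy loss these approximations introduce.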

332 stars. No commits in the last 6 months.

Use this if you need to optimize large deep learning models for faster inference or deployment on devices with limited memory and processing power, without significantly sacrificing accuracy.

Not ideal if you are a data scientist or analyst looking for a no-code solution to optimize pre-trained models, as this tool requires familiarity with PyTorch and model development.

deep-learning-optimization model-compression neural-network-deployment resource-constrained-ai machine-learning-engineering
Status: Stale (6 months) · No package published · No known dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 332
Forks: 37
Language: Python
License: Apache-2.0
Last pushed: Jan 12, 2024
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/megvii-research/Sparsebit"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.