google-research/rigl

End-to-end training of sparse deep neural networks with little-to-no performance loss.

Score: 45 / 100 (Emerging)

This project helps machine learning engineers and researchers optimize deep neural networks for deployment by making them sparse. Rather than pruning a dense model after training, it trains sparse networks end to end, periodically dropping weak connections and growing promising new ones, so size and computational demands stay low throughout training. The output is a smaller, more efficient model with accuracy comparable to the original dense model.
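
The underlying method, RigL ("Rigging the Lottery: Making All Tickets Winners"), periodically rewires the sparse connectivity during training: it drops the lowest-magnitude active weights and grows the same number of inactive connections where gradients are largest. The NumPy sketch below illustrates one such drop-and-grow mask update; the function and variable names are illustrative rather than the repo's API, and details such as the decaying update schedule are omitted.

```python
# Illustrative drop-and-grow mask update in the spirit of RigL.
# Not the repo's API; a simplified sketch for intuition only.
import numpy as np

def update_mask(weights, grads, mask, drop_fraction=0.3):
    """Drop the weakest active connections, grow where gradients are largest."""
    n_update = int(drop_fraction * mask.sum())
    if n_update == 0:
        return mask.copy()

    # Drop: among active connections, deactivate those with smallest |weight|.
    drop_scores = np.where(mask, np.abs(weights), np.inf)
    drop_idx = np.argsort(drop_scores, axis=None)[:n_update]
    new_mask = mask.copy().ravel()
    new_mask[drop_idx] = False

    # Grow: among previously inactive connections, activate the largest |grad|.
    grow_scores = np.where(mask, -np.inf, np.abs(grads)).ravel()
    grow_idx = np.argsort(grow_scores)[-n_update:]
    new_mask[grow_idx] = True

    return new_mask.reshape(mask.shape)

# Sparsity is preserved: the same number of connections is dropped and grown.
rng = np.random.default_rng(0)
w, g = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
m = rng.random((8, 8)) < 0.2          # ~80% sparse boolean mask
assert update_mask(w, g, m).sum() == m.sum()
```

In the paper, these updates run periodically during training with a drop fraction that decays over time; between updates, training proceeds with the mask fixed.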

335 stars. No commits in the last 6 months.

Use this if you need to deploy large neural networks on resource-constrained devices or reduce inference costs without significantly sacrificing accuracy.

Not ideal if your primary goal is maximizing raw accuracy, with no regard for computational cost or model size.

deep-learning model-optimization edge-ai computer-vision resource-management
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25

How are scores calculated? The four subscores sum to the overall total: 0 + 10 + 16 + 19 = 45 / 100.

Stars: 335
Forks: 48
Language: Python
License: Apache-2.0
Last pushed: Jan 26, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/google-research/rigl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
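
For scripted access, here is a minimal Python sketch that calls the same endpoint and prints the returned JSON. The URL is taken from the curl command above; the shape of the response body is not documented here, so the example simply dumps whatever fields the API returns.

```python
# Fetch the quality report for google-research/rigl from the public API.
# The endpoint URL is from the curl example above; the JSON schema is not
# documented here, so we just pretty-print whatever comes back.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/google-research/rigl"

with urllib.request.urlopen(URL, timeout=10) as resp:
    report = json.load(resp)

print(json.dumps(report, indent=2))
```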