google-research/rigl
End-to-end training of sparse deep neural networks with little-to-no performance loss.
This project helps machine learning engineers and researchers optimize deep neural networks for deployment by making them sparse. Rather than pruning a dense model after training, it trains networks that are sparse from the start, periodically dropping low-magnitude connections and growing new ones where gradients are large. The output is a smaller, more efficient model that maintains comparable performance to a densely trained one.
335 stars. No commits in the last 6 months.
Use this if you need to deploy large neural networks on resource-constrained devices or reduce inference costs without significantly sacrificing accuracy.
Not ideal if your primary goal is to maximize raw model accuracy above all else, regardless of computational efficiency or model size.
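The drop-and-grow connectivity update described above can be sketched in a few lines. This is a minimal illustration of the idea, not the repo's actual API: it assumes flattened weight, gradient, and mask lists, removes the lowest-magnitude active weights, and activates an equal number of inactive connections with the largest gradient magnitude, so overall sparsity stays fixed.

```python
def rigl_update(weights, grads, mask, drop_frac=0.3):
    """One connectivity update in the spirit of RigL (illustrative sketch only).

    weights, grads: lists of floats (flattened layer parameters)
    mask: list of 0/1 flags marking which connections are active
    """
    active = [i for i, m in enumerate(mask) if m]
    inactive = [i for i, m in enumerate(mask) if not m]
    n_swap = int(drop_frac * len(active))

    # Drop: active connections with the smallest |weight|.
    to_drop = sorted(active, key=lambda i: abs(weights[i]))[:n_swap]
    # Grow: inactive connections with the largest |gradient|.
    to_grow = sorted(inactive, key=lambda i: -abs(grads[i]))[:n_swap]

    new_weights, new_mask = list(weights), list(mask)
    for i in to_drop:
        new_mask[i] = 0
        new_weights[i] = 0.0
    for i in to_grow:
        new_mask[i] = 1
        new_weights[i] = 0.0  # newly grown connections start at zero
    return new_weights, new_mask
```

Because the number of dropped and grown connections is equal, the model's parameter budget never changes during training; only the topology moves.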
Stars
335
Forks
48
Language
Python
License
Apache-2.0
Category
ml-frameworks
Last pushed
Jan 26, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/google-research/rigl"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
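For programmatic use, the endpoint above can be wrapped in a small stdlib-only helper. The path layout here is inferred from the example curl command; this is a sketch, not an official client.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com"

def quality_api_url(category, owner, repo, base=BASE):
    """Build the /api/v1/quality endpoint URL for a repository."""
    return f"{base}/api/v1/quality/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    """Fetch and decode the JSON report (requires network access)."""
    with urllib.request.urlopen(quality_api_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example (makes a real request and counts against the 100/day limit):
# report = fetch_quality("ml-frameworks", "google-research", "rigl")
```

Keeping URL construction separate from the fetch makes the helper easy to test without hitting the rate-limited endpoint.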
Higher-rated alternatives
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...