ZIB-IOL/SMS
Code to reproduce the experiments of the ICLR 2024 paper "Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging".
The code lets machine learning researchers apply pruning and model-averaging strategies to pretrained neural networks and measure the resulting sparse models' characteristics and performance.
No commits in the last 6 months.
Use this if you are a machine learning researcher or practitioner interested in advanced model pruning techniques, specifically combining pruning with model averaging to create more efficient and robust neural networks.
Not ideal if you are looking for a plug-and-play solution for general model optimization or a library for immediate deployment in production environments without deep research interest.
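The core idea combined here, pruning followed by averaging retrained copies that share one sparsity mask, can be sketched in a few lines. This is a toy illustration under stated assumptions, not the repository's actual implementation; `magnitude_mask` and `average_sparse_models` are hypothetical names.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Keep the largest-magnitude fraction (1 - sparsity) of weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude is the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold

def average_sparse_models(models, mask):
    """Uniformly average models that share one sparsity mask."""
    soup = np.mean(models, axis=0)
    # Averaging preserves sparsity only because all inputs share the mask
    return soup * mask

# Toy example: prune one base model, perturb it twice to mimic
# independently retrained copies, then average them into a "soup".
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
mask = magnitude_mask(base, sparsity=0.5)
variants = [base * mask + 0.01 * rng.normal(size=base.shape) * mask
            for _ in range(2)]
soup = average_sparse_models(variants, mask)
print(float((soup != 0).mean()))  # fraction of nonzero weights
```

Because every averaged copy starts from the same pruned model, the averaged weights stay exactly as sparse as the inputs, which is what distinguishes this recipe from averaging independently pruned models with mismatched masks.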
Stars: 12
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Oct 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ZIB-IOL/SMS"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
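The same endpoint can be called from Python. A minimal sketch, assuming only the URL shown above; the response schema is not documented here, so `fetch_quality` simply decodes whatever JSON the service returns.

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality record."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON record (performs a network call)."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# URL for this repository's record (matches the curl example above)
print(quality_url("ml-frameworks", "ZIB-IOL", "SMS"))
```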
Higher-rated alternatives
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...