VITA-Group/SMC-Bench

[ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang

Score: 24 / 100 (Experimental)

This project helps machine learning researchers and practitioners evaluate sparse neural networks on complex tasks. It applies different sparsity algorithms to existing deep learning models and reports how well the pruned models perform on tasks such as commonsense reasoning, multilingual translation, and protein prediction. Researchers and engineers working with large neural networks can use it to gauge how much capability sparsification actually preserves.

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer interested in assessing and comparing the performance of different sparse neural network algorithms on diverse, challenging datasets.

Not ideal if you are looking for a tool to train a new sparse model from scratch or for general-purpose model training outside of benchmarking sparsity.

deep-learning neural-networks model-optimization natural-language-processing bioinformatics
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 9 / 25

How are scores calculated?

Stars: 28
Forks: 3
Language: Python
License: None
Last pushed: Aug 29, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VITA-Group/SMC-Bench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
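The same endpoint can be queried from Python instead of curl. A minimal sketch using only the standard library; the URL layout is taken from the curl command above, but the shape of the JSON response is an assumption, not documented here:

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the report URL following the path layout shown in the curl example."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch a quality report. Assumes the endpoint returns a JSON object
    (hypothetical schema; inspect the real response before relying on keys)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# For the repository on this page:
url = quality_url("ml-frameworks", "VITA-Group", "SMC-Bench")
```

Note that unauthenticated use is limited to 100 requests per day, so responses worth keeping should be cached locally.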