BaiTheBest/SparseLLM

Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024)

Score: 38 / 100 (Emerging)

This project helps machine learning researchers and engineers make large language models (LLMs) such as OPT and LLaMA-2 smaller and faster. By pruning connections (weights) from these models, it lets them run more efficiently on hardware. You provide an existing LLM and a target sparsity level, and it outputs a more compact, pruned version of that model.
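SparseLLM's own API isn't shown on this page. As a rough illustration of what "pruning to a sparsity level" means, here is a minimal sketch using PyTorch's built-in unstructured magnitude pruning; this is not SparseLLM's global method, and the layer size and sparsity value are placeholders:

import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for one LLM layer; a real run would load OPT or LLaMA-2 weights.
layer = nn.Linear(4096, 4096)

# Zero out the 50% of weights with the smallest magnitude (local, one-shot
# pruning; SparseLLM instead optimizes a global objective across layers).
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")  # make the sparsity permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"Layer sparsity: {sparsity:.0%}")  # ~50%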

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer looking to optimize the computational efficiency and memory footprint of large language models for deployment or experimentation.

Not ideal if you need a quick, one-shot pruning solution, or if you have very limited GPU memory and want to prune models larger than LLaMA-2-7B without reducing the calibration data size.

large-language-models model-optimization deep-learning-research model-compression AI-efficiency
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 67
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: Mar 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/BaiTheBest/SparseLLM"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
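The same endpoint can also be queried from Python. A small sketch using the requests library; the response schema isn't documented here, so the code just prints the raw JSON:

import requests

# Public endpoint from above; no API key required up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/BaiTheBest/SparseLLM"

resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()

# Inspect `data` to see the actual field names returned by the API.
print(data)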