horseee/LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.

Quality score: 47 / 100 (Emerging)

This project helps machine learning engineers and researchers reduce the size of large language models (LLMs) such as Llama, BLOOM, and Vicuna. Given an existing LLM, it prunes non-essential structural components while aiming to preserve the model's multi-task abilities. The output is a smaller, more efficient LLM that requires fewer computational resources, enabling easier deployment and faster inference.

1,109 stars. No commits in the last 6 months.

Use this if you need to deploy a large language model but are constrained by computational resources, and want to reduce its size and inference cost without significantly impacting performance.

Not ideal if you are looking for a tool to train LLMs from scratch or fine-tune them on a completely new domain without any size reduction goals.

Tags: Large Language Models, Model Compression, Deep Learning, Deployment, AI Efficiency, Resource Optimization
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 1,109
Forks: 130
Language: Python
License: Apache-2.0
Last pushed: Oct 07, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/horseee/LLM-Pruner"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
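The same endpoint can be queried from Python. A minimal sketch using only the standard library is shown below; the URL path comes from the curl example above, but the helper names and the assumption that the response is JSON are mine, so adjust the parsing to the actual response shape.

```python
# Sketch: query the quality API from Python instead of curl.
# The endpoint path is taken from the curl example; the JSON response
# shape is an assumption and may differ in practice.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report for a repository (assumes JSON)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same repository as the curl example above.
    print(quality_url("transformers", "horseee", "LLM-Pruner"))
```

With a free API key, you would typically add it as a header or query parameter; check the service's documentation for the exact mechanism.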