jordddan/Pruning-LLMs

A framework for pruning LLMs down to any size and any configuration.

Score: 31 / 100 (Emerging)

This framework helps machine learning practitioners efficiently reduce the size of large language models (LLMs) without significant loss of capability. It takes a pre-trained Transformer-based LLM and allows you to specify a custom, smaller configuration, outputting a compact model that is faster to run and easier to fine-tune for specific tasks. This is ideal for scientists, engineers, or product managers who need to deploy powerful LLMs with limited computational resources.
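
The repo's actual interface is not shown on this page; as a rough sketch of the "custom, smaller configuration" idea, the Python below depth-prunes a Hugging Face GPT-2 checkpoint by keeping 6 of its 12 transformer blocks. The model name, layer indices, and output path are illustrative assumptions, not this repo's API.

import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a pre-trained 12-block GPT-2 as the model to shrink.
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical pruning choice: keep every other transformer block.
keep = [0, 2, 4, 6, 8, 10]
model.transformer.h = nn.ModuleList(model.transformer.h[i] for i in keep)
model.config.n_layer = len(keep)  # keep the config consistent with the new depth

# The pruned model still runs (quality drops until it is fine-tuned).
tok = GPT2TokenizerFast.from_pretrained("gpt2")
out = model.generate(**tok("Pruning is", return_tensors="pt"), max_new_tokens=10)
print(tok.decode(out[0]))

model.save_pretrained("gpt2-pruned-6layer")

In practice a pruned model like this is then fine-tuned to recover capability, which is why the page stresses re-training below.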

No commits in the last 6 months.

Use this if you need to create a smaller, more efficient version of an existing large language model for deployment or specialized fine-tuning, especially when working with limited computing resources.

Not ideal if you want a simple, out-of-the-box solution for general LLM use: pruning here involves custom architectural modifications and re-training.

Tags: large-language-models, model-optimization, AI-deployment, resource-efficiency, natural-language-processing

Status: stale (6 months), no published package, no dependents

Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 6 / 25

Stars: 95
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Mar 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jordddan/Pruning-LLMs"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.