arcee-ai/PruneMe

Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models

Score: 34 / 100 (Emerging)

This project helps machine learning engineers and researchers reduce the computational cost of large language models (LLMs). It analyzes layer-to-layer similarity on a dataset of your choice to identify blocks of redundant layers, then removes them. The output is a smaller, more efficient LLM that performs nearly as well as the original, allowing for faster fine-tuning and inference.
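The layer-similarity idea described above can be sketched in a few lines. This is a minimal illustration, not PruneMe's actual code: it assumes angular distance between hidden states n layers apart as the redundancy metric (a common choice in the depth-pruning literature), and the function names and toy data below are invented for the example.

```python
import numpy as np

def angular_distance(a, b):
    # Per-token angular distance in [0, 1]: arccos of cosine similarity, scaled by 1/pi.
    cos = np.sum(a * b, axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    )
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

def most_redundant_block(hidden_states, block_size):
    # hidden_states: one (tokens, dim) array per layer boundary.
    # Returns the start index of the block of `block_size` layers whose input and
    # output representations are most similar (smallest mean angular distance),
    # i.e. the block that is cheapest to remove, plus all per-block distances.
    n_layers = len(hidden_states)
    distances = [
        angular_distance(hidden_states[i], hidden_states[i + block_size]).mean()
        for i in range(n_layers - block_size)
    ]
    return int(np.argmin(distances)), distances

# Toy demo with random "hidden states" for 8 layer boundaries.
rng = np.random.default_rng(0)
states = [rng.normal(size=(16, 64)) for _ in range(8)]
start, dists = most_redundant_block(states, block_size=2)
```

In practice the hidden states would come from running the model over a calibration dataset; the block starting at `start` is the pruning candidate.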

263 stars. No commits in the last 6 months.

Use this if you are an ML engineer or researcher looking to make your large language models run faster and with less memory without significantly impacting their performance.

Not ideal if you need to optimize an LLM for very specific, niche tasks where even minor performance degradation is unacceptable, or if you are not working with LLMs.

large-language-models model-optimization deep-learning computational-efficiency model-fine-tuning
No License · Stale (6 mo) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 16 / 25


Stars: 263
Forks: 32
Language: Python
License: none
Last pushed: Apr 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/arcee-ai/PruneMe"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
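For script use, the curl call above can be reproduced in Python with the standard library. The URL format is taken directly from the example; the response schema is not documented here, so the actual fetch is left as a commented sketch and `quality_url` is a helper name invented for this example.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry, owner, repo):
    # Build the quality-score endpoint URL for a given repository,
    # percent-encoding each path segment.
    return f"{API_BASE}/{quote(registry)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "arcee-ai", "PruneMe")

# Fetch the JSON payload (uncomment to run; the free tier allows 100 requests/day):
# with urlopen(url) as resp:
#     data = json.load(resp)
```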