arcee-ai/PruneMe
Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models
This project helps machine learning engineers and researchers reduce the computational cost of large language models (LLMs). It measures the similarity between layer activations over a sample dataset, identifies blocks of redundant layers, and removes them. The output is a smaller, more efficient LLM that performs nearly as well as the original, enabling faster fine-tuning and inference.
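As a rough illustration of this kind of analysis (a minimal sketch of the general idea, not PruneMe's actual implementation), one can score each candidate block of n consecutive layers by how similar the hidden states entering the block are to those leaving it, averaged over tokens from a dataset; the most similar block is the most redundant. The array shapes and helper name below are hypothetical.

```python
import numpy as np

def block_similarity(hidden_states, n):
    """Score each candidate block of n consecutive layers by the mean
    cosine similarity between activations entering and leaving it.

    hidden_states: array of shape (num_layers + 1, num_tokens, dim),
    a stand-in for activations collected at every layer boundary while
    running sample data through the model. Higher score = more redundant.
    """
    num_layers = hidden_states.shape[0] - 1
    scores = []
    for start in range(num_layers - n + 1):
        a, b = hidden_states[start], hidden_states[start + n]
        cos = np.sum(a * b, axis=-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
        )
        scores.append(cos.mean())
    return np.array(scores)

# Toy activations: 8 layers (9 boundaries), 16 tokens, 32-dim states.
rng = np.random.default_rng(0)
h = rng.standard_normal((9, 16, 32))
# Make layers 4..7 nearly pass-through, so a block there looks redundant.
for layer in range(4, 8):
    h[layer + 1] = h[layer] + 0.01 * rng.standard_normal((16, 32))

scores = block_similarity(h, n=3)
best = int(np.argmax(scores))  # start index of the most redundant block
```

In a real run the activations would come from forward passes of the model over a calibration dataset, and the highest-scoring block would be dropped before healing the model with a short fine-tune.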
263 stars. No commits in the last 6 months.
Use this if you are an ML engineer or researcher looking to make your large language models run faster and with less memory without significantly impacting their performance.
Not ideal if you need to optimize an LLM for very specific, niche tasks where even minor performance degradation is unacceptable, or if you are not working with LLMs.
Stars: 263
Forks: 32
Language: Python
License: —
Category: —
Last pushed: Apr 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/arcee-ai/PruneMe"
Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
p-e-w/heretic
Fully automatic censorship removal for language models
Orion-zhen/abliteration
Make abliterated models with transformers, easy and fast
YerbaPage/LongCodeZip
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
locuslab/wanda
A simple and effective LLM pruning approach.