luuyin/OWL
Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity"
This project helps machine learning engineers shrink large language models (LLMs) such as LLaMA and OPT without significantly degrading their performance. It takes an LLM and produces a smaller, more efficient model by pruning weights non-uniformly across layers, assigning each layer a sparsity ratio informed by its share of outlier weights. It suits ML engineers, researchers, and MLOps specialists deploying LLMs where computational resources or inference speed are critical.
No commits in the last 6 months.
Use this if you need to deploy large language models more efficiently by making them smaller while preserving their accuracy, especially at high sparsity levels.
Not ideal if you are working with vision models or require uniform pruning across all layers of your language model.
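The core idea named in the title is that layers differ in how many "outlier" (disproportionately large-magnitude) weights they contain, so a uniform sparsity ratio is suboptimal. A minimal sketch of that allocation step is below; the threshold rule, the parameter names (`m`, `lam`), and the exact mapping from outlier ratio to sparsity are illustrative assumptions, not the repo's implementation:

```python
import numpy as np

def owl_layer_sparsities(layer_scores, target=0.7, m=5.0, lam=0.08):
    """Allocate per-layer sparsity from outlier ratios (illustrative sketch).

    layer_scores: one 1-D array of per-weight importance scores per layer
    (e.g. a Wanda-style |weight| * input-norm metric). `m` is the outlier
    threshold multiplier and `lam` bounds deviation from the target
    sparsity; both are assumed values for this sketch.
    """
    # Outlier ratio per layer: fraction of scores above m times the layer mean.
    ratios = np.array([np.mean(s > m * s.mean()) for s in layer_scores])
    # More outliers -> lower sparsity for that layer (keep more of its weights).
    centered = ratios - ratios.mean()
    shift = -centered / (np.abs(centered).max() + 1e-12) * lam
    sparsities = target + shift  # each lies in [target - lam, target + lam]
    # Re-center so the average sparsity matches the global target.
    return sparsities + (target - sparsities.mean())
```

Each layer is then pruned to its own ratio instead of a global one, which is exactly why this repo is "not ideal if you require uniform pruning across all layers."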
Stars: 81
Forks: 9
Language: Python
License: MIT
Category: (not listed)
Last pushed: Jul 07, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/luuyin/OWL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
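For programmatic use, the same endpoint can be queried from the Python standard library. The URL scheme below simply mirrors the curl example above; the response fields are not documented here, so the JSON is returned as-is:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo, base=BASE):
    # Build the endpoint URL following the pattern in the curl example.
    return f"{base}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem, owner, repo):
    # Fetch and decode the quality JSON (100 requests/day without an API key).
    with urlopen(quality_url(ecosystem, owner, repo), timeout=10) as resp:
        return json.load(resp)

# quality_url("transformers", "luuyin", "OWL")
# -> "https://pt-edge.onrender.com/api/v1/quality/transformers/luuyin/OWL"
```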
Higher-rated alternatives
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
p-e-w/heretic
Fully automatic censorship removal for language models
Orion-zhen/abliteration
Make abliterated models with transformers, easy and fast
YerbaPage/LongCodeZip
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
locuslab/wanda
A simple and effective LLM pruning approach.