JingyangXiang/DFRot
[COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu article: https://zhuanlan.zhihu.com/p/12186430182
This project offers an improved method for quantizing large language models (LLMs) to use less memory and compute while maintaining performance. It takes a pre-trained LLM and outputs a more efficient, quantized version. The primary users are machine learning engineers or researchers working on deploying LLMs, especially on resource-constrained hardware.
No commits in the last 6 months.
Use this if you are developing or deploying large language models and need to reduce their memory footprint and computational requirements without significantly sacrificing accuracy.
Not ideal if you are working with smaller models that don't face severe memory or computational constraints, or if you require maximum possible model accuracy at any cost.
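The core idea behind rotation-based quantization methods like this one can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration of the general technique (multiplying activations by an orthogonal matrix to spread outlier channels before low-bit quantization), not DFRot's actual refined-rotation procedure; all names and parameters below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n: int) -> np.ndarray:
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def fake_quant_int4(x: np.ndarray) -> np.ndarray:
    """Symmetric per-tensor 4-bit fake quantization (round to a 16-level grid)."""
    scale = np.abs(x).max() / 7.0
    return np.clip(np.round(x / scale), -8, 7) * scale

n = 1024
# Activations with one massive-activation channel, as often seen in LLMs.
x = rng.standard_normal((16, n))
x[:, 0] *= 50.0

# Because Q is orthogonal, (x @ Q) @ Q.T == x in full precision, so the
# rotation can be folded into adjacent weights without changing the model.
Q = random_orthogonal(n)

# Quantization error without and with rotation.
mse_plain = np.mean((fake_quant_int4(x) - x) ** 2)
mse_rot = np.mean((fake_quant_int4(x @ Q) @ Q.T - x) ** 2)

# The outlier channel inflates the per-tensor scale, so the unrotated
# quantizer flattens the small channels; rotating first spreads the
# outlier energy across all dimensions and typically cuts the error.
print(f"MSE without rotation: {mse_plain:.4f}")
print(f"MSE with rotation:    {mse_rot:.4f}")
```

The rotation is lossless in full precision; the benefit appears only once low-bit quantization is applied, because the dynamic range of the rotated tensor is far smaller.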
Stars: 29
Forks: 2
Language: Python
License: —
Category:
Last pushed: Mar 05, 2025
Commits (30d): 0
Get this data via API:

```
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/JingyangXiang/DFRot"
```

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives:
- ModelTC/LightCompress: [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
- p-e-w/heretic: Fully automatic censorship removal for language models
- Orion-zhen/abliteration: Make abliterated models with transformers, easy and fast
- YerbaPage/LongCodeZip: LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
- locuslab/wanda: A simple and effective LLM pruning approach.