JingyangXiang/DFRot

[COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.com/p/12186430182

Score: 21 / 100 (Experimental)

This project provides a refined-rotation method for quantizing large language models (LLMs) so they use less memory and compute while preserving accuracy. It takes a pre-trained LLM and produces a more efficient quantized version. The primary users are machine learning engineers and researchers deploying LLMs, especially on resource-constrained hardware.
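For intuition, here is a minimal NumPy sketch of the rotation idea behind methods like DFRot. It is not this repository's API: the int4 quantizer, the injected outlier, and the random orthogonal matrix R are illustrative assumptions (DFRot refines the rotation rather than drawing it at random).

import numpy as np

def quantize_int4(x):
    # Symmetric per-tensor fake quantization to 4-bit levels in [-8, 7].
    scale = np.abs(x).max() / 7.0
    q = np.clip(np.round(x / scale), -8, 7)
    return q * scale  # dequantized values

rng = np.random.default_rng(0)
n = 128
W = rng.standard_normal((n, n))
W[0, :] *= 50.0  # exaggerated outlier channel, the failure mode rotations target

# Random orthogonal rotation via QR; orthogonality means the layer output is
# unchanged before quantization: x @ W == (x @ R) @ (R.T @ W).
R, _ = np.linalg.qr(rng.standard_normal((n, n)))

err_plain   = np.linalg.norm(W - quantize_int4(W))
err_rotated = np.linalg.norm(W - R @ quantize_int4(R.T @ W))
print(f"int4 error without rotation: {err_plain:.1f}")
print(f"int4 error with rotation:    {err_rotated:.1f}")

The rotation spreads the outlier's energy across all channels, so the per-tensor quantization scale shrinks and the round-off error drops; DFRot's contribution, per its title, is refining the rotation rather than picking it at random.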

No commits in the last 6 months.

Use this if you are developing or deploying large language models and need to reduce their memory footprint and computational requirements without significantly sacrificing accuracy.

Not ideal if you are working with smaller models that don't face severe memory or computational constraints, or if you require maximum possible model accuracy at any cost.

large-language-models model-quantization AI-inference-optimization deep-learning-deployment
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 6 / 25


Stars: 29
Forks: 2
Language: Python
License: None
Last pushed: Mar 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/JingyangXiang/DFRot"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
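The same data can be fetched from Python. A minimal sketch using the requests library; the response shape is not documented here, so the example assumes a JSON body and just prints whatever comes back:

import requests

# Same endpoint as the curl example above; the free tier needs no API key.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/JingyangXiang/DFRot"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumes a JSON body; field names are not documented here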