Aaronhuang-778/SliM-LLM
[ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models
This project helps machine learning engineers and researchers reduce the computational resources needed to run large language models (LLMs) like LLaMA and OPT. It takes a pre-trained LLM and converts it into a more efficient, mixed-precision version, allowing it to perform tasks like text generation, question answering, and reasoning with significantly less memory and faster inference on GPUs. This is ideal for those deploying or experimenting with LLMs on hardware with limited capacity.
No commits in the last 6 months.
Use this if you need to deploy or run large language models more efficiently on constrained hardware, aiming to reduce memory footprint and increase inference speed without a significant loss in accuracy.
Not ideal if you are solely focused on training new, full-precision large language models from scratch or if your current hardware already has ample resources to run LLMs without optimization.
Stars: 53
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Aug 09, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Aaronhuang-778/SliM-LLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
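The same request can be made from Python. A minimal sketch using only the standard library, assuming the endpoint shown in the curl command above (the response schema is not documented here, so the result is returned as raw parsed JSON):

```python
import json
import urllib.request

# Endpoint base taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo: str) -> str:
    """Build the API URL for an owner/name repo slug."""
    return f"{BASE}/{repo}"

def fetch_quality(repo: str) -> dict:
    """Fetch the quality data for a repo (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)

# Example:
# data = fetch_quality("Aaronhuang-778/SliM-LLM")
```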
Higher-rated alternatives
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
p-e-w/heretic
Fully automatic censorship removal for language models
Orion-zhen/abliteration
Make abliterated models with transformers, easy and fast
YerbaPage/LongCodeZip
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
locuslab/wanda
A simple and effective LLM pruning approach.