Aaronhuang-778/SliM-LLM

[ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models

Overall score: 25 / 100 (Experimental)

This project helps machine learning engineers and researchers reduce the computational resources needed to run large language models (LLMs) like LLaMA and OPT. It takes a pre-trained LLM and converts it into a more efficient, mixed-precision version, allowing it to perform tasks like text generation, question answering, and reasoning with significantly less memory and faster inference on GPUs. This is ideal for those deploying or experimenting with LLMs on hardware with limited capacity.
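The salience-driven mixed-precision idea named in the title can be illustrated with a short, self-contained sketch. Everything below (the group size, the magnitude-based salience proxy, the two bit-widths, and the function names) is an illustrative assumption for this page, not the repository's actual API or algorithm.

    import numpy as np

    def quantize_group(w, bits):
        # Uniform symmetric quantization of one weight group to `bits` bits.
        qmax = 2 ** (bits - 1) - 1
        scale = np.abs(w).max() / qmax if np.abs(w).max() > 0 else 1.0
        q = np.clip(np.round(w / scale), -qmax - 1, qmax)
        return q * scale  # dequantized values, handy for measuring error

    def mixed_precision_quantize(weights, group_size=128, bits_lo=2, bits_hi=4):
        # Split a weight matrix into groups, rank groups by a salience proxy
        # (mean |w| here, only a stand-in), and give the top half the higher
        # bit-width so the average stays at (bits_lo + bits_hi) / 2.
        w = weights.reshape(-1, group_size)   # assumes size % group_size == 0
        salience = np.abs(w).mean(axis=1)
        order = np.argsort(-salience)         # most salient groups first
        bits = np.full(len(w), bits_lo)
        bits[order[: len(w) // 2]] = bits_hi
        deq = np.stack([quantize_group(g, b) for g, b in zip(w, bits)])
        return deq.reshape(weights.shape), bits

    # Example: quantize a random 512x512 layer and report the error.
    W = np.random.randn(512, 512).astype(np.float32)
    W_q, group_bits = mixed_precision_quantize(W)
    print("mean abs error:", np.abs(W - W_q).mean())

The repository's real entry points, scripts, and bit-width search should be taken from its README rather than from this sketch.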

No commits in the last 6 months.

Use this if you need to deploy or run large language models more efficiently on constrained hardware, aiming to reduce memory footprint and increase inference speed without a significant loss in accuracy.

Not ideal if you are solely focused on training new, full-precision large language models from scratch or if your current hardware already has ample resources to run LLMs without optimization.

large-language-models model-optimization deep-learning-deployment AI-inference resource-management
No License · Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 9 / 25


Stars: 53
Forks: 4
Language: Python
License: None
Last pushed: Aug 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Aaronhuang-778/SliM-LLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
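The same endpoint can also be queried from Python; the snippet below assumes nothing beyond the URL shown above and the keyless access tier.

    import requests

    URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/Aaronhuang-778/SliM-LLM"

    resp = requests.get(URL, timeout=30)  # no API key needed at the 100 requests/day tier
    resp.raise_for_status()
    print(resp.json())                    # field names follow the API's response schema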