AIoT-MLSys-Lab/SVD-LLM

[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2

Quality score: 46 / 100 (Emerging)

This project helps machine learning engineers and researchers reduce the size of large language models (LLMs) like LLaMA and Mistral. It takes an existing LLM and outputs a significantly smaller, compressed version that retains strong performance. This is for professionals building and deploying LLMs who need to optimize their models for efficiency and resource constraints.
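The core idea behind SVD-based compression is to replace a large weight matrix with a low-rank factorization obtained from a truncated singular value decomposition. The sketch below is a minimal illustration of that idea using NumPy on a random matrix; it is not the repo's actual pipeline (SVD-LLM adds truncation-aware data whitening and post-compression updates on top of this), and the function name `compress_linear` is hypothetical.

```python
import numpy as np

def compress_linear(W, rank):
    """Approximate W (m x n) as A @ B with A (m x rank) and B (rank x n)
    via truncated SVD, cutting parameters from m*n to rank*(m + n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]  # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
A, B = compress_linear(W, rank=64)

# Rank 64 halves the parameter count for a 256x256 layer:
# 64 * (256 + 256) = 32768 vs the original 65536.
saved = 1 - (A.size + B.size) / W.size
```

In a real LLM the same factorization is applied per linear layer, and `rank` trades accuracy for size; the papers' contribution is choosing the truncation so the compressed model retains performance.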

284 stars. No commits in the last 6 months.

Use this if you are an AI/ML engineer or researcher working with large language models and need to reduce their memory footprint or improve inference speed without sacrificing too much performance.

Not ideal if you are looking for a simple, no-code solution to apply LLMs, or if you do not have a strong understanding of model compression techniques and fine-tuning.

large-language-models model-compression resource-optimization machine-learning-engineering deep-learning-deployment
Flags: Stale (6 months) · No package · No dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 284
Forks: 42
Language: Python
License: Apache-2.0
Last pushed: Aug 28, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AIoT-MLSys-Lab/SVD-LLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
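The same endpoint can be called from Python. The sketch below only builds the request URL shown in the curl example above; the ecosystem segment (`transformers`) is taken from that example, and the response schema is not documented here, so fetching and parsing (e.g. with `urllib.request` and `json`) is left as a comment rather than assumed.

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    """Build the quality-endpoint URL for a repository.

    `quality_url` is a hypothetical helper; the path layout mirrors the
    curl example above: /quality/<ecosystem>/<owner>/<repo>.
    """
    return f"{API_BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "AIoT-MLSys-Lab", "SVD-LLM")
# To actually fetch: urllib.request.urlopen(url), then json.loads(...)
# on the body. For the higher 1,000/day tier, pass your key as the API
# expects (the header name is not specified in this listing).
```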