AIoT-MLSys-Lab/SVD-LLM
[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2
This project helps machine learning engineers and researchers reduce the size of large language models (LLMs) such as LLaMA and Mistral. It takes an existing LLM and produces a significantly smaller, compressed version that retains strong performance. It is aimed at professionals building and deploying LLMs who need to fit models within tight memory and compute budgets.
284 stars. No commits in the last 6 months.
Use this if you are an AI/ML engineer or researcher working with large language models and need to reduce their memory footprint or improve inference speed without sacrificing too much performance.
Not ideal if you are looking for a simple, no-code solution to apply LLMs, or if you do not have a strong understanding of model compression techniques and fine-tuning.
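As the name suggests, the core idea behind SVD-LLM is compressing weight matrices via truncated singular value decomposition. The sketch below is a hypothetical illustration of that general technique, not the repository's actual API: a weight matrix W is replaced by two low-rank factors, cutting the parameter count.

```python
import numpy as np

# Illustrative truncated-SVD compression of a single weight matrix.
# Shapes and rank are arbitrary stand-ins, not values used by SVD-LLM.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))  # stand-in for one LLM weight matrix

rank = 64
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]  # (512, rank): columns of U scaled by singular values
B = Vt[:rank, :]            # (rank, 512)

# A @ B is the best rank-64 approximation of W (Eckart-Young theorem).
original_params = W.size
compressed_params = A.size + B.size
print(compressed_params / original_params)  # 0.25 with these shapes
```

At inference time the layer computes x @ A @ B instead of x @ W, trading a small accuracy loss for a 4x reduction in stored parameters at this rank. SVD-LLM's contribution is choosing and correcting these truncations so that end-task performance is preserved; see the repository for the actual method.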
Stars: 284
Forks: 42
Language: Python
License: Apache-2.0
Last pushed: Aug 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AIoT-MLSys-Lab/SVD-LLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
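For scripted use, the endpoint can be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl example above; the response's JSON field names are not documented here, so the fetch is left commented out.

```python
import json
from urllib.request import urlopen

# URL pattern taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub repo."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("AIoT-MLSys-Lab", "SVD-LLM")
print(url)

# Uncomment to fetch (anonymous access is limited to 100 requests/day):
# with urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```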
Higher-rated alternatives
ZHZisZZ/dllm
dLLM: Simple Diffusion Language Modeling
pengzhangzhi/Open-dLLM
Open diffusion language model for code generation — releasing pretraining, evaluation,...
EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. ACM...
THUDM/LongWriter
[ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
datamllab/LongLM
[ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning