LightCompress and Awesome-Efficient-LLM
A compression toolkit and a curated reference list are ecosystem siblings: one provides practical implementations of model compression techniques, while the other catalogs and organizes the broader landscape of efficient-LLM research.
About LightCompress
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models.
This toolkit helps organizations make their large AI models, such as those for generating text, images, or video, run more efficiently and use less memory. It takes an existing large model and produces a smaller, faster version that largely preserves the original's accuracy. It is aimed at AI developers and MLOps engineers who need to deploy large models cost-effectively across varied hardware.
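To make the idea of compression concrete, here is a minimal, self-contained sketch of one common technique, symmetric per-tensor int8 weight quantization. This is an illustration of the general method only, not LightCompress's actual API; the function names and the toy weight matrix are invented for the example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map floats onto the int8 range [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from the int8 codes and the scale."""
    return q.astype(np.float32) * scale

# A toy weight matrix stands in for one layer of a model.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_error = np.abs(w - w_hat).max()  # bounded by half the quantization step
```

The storage win is the point: each weight shrinks from 4 bytes (float32) to 1 byte (int8), at the cost of a small, bounded rounding error per weight. Real toolkits layer calibration, per-channel scales, and accuracy-recovery techniques on top of this basic idea.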
About Awesome-Efficient-LLM
horseee/Awesome-Efficient-LLM
A curated list for Efficient Large Language Models
This is a curated list of research papers and projects focused on making Large Language Models (LLMs) run more efficiently. It provides a comprehensive collection of resources on topics like pruning, quantization, and efficient training, helping researchers and engineers find solutions to optimize LLMs for speed and resource use. You'll find links to papers and their code repositories, categorized by technical approach.
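Pruning, one of the techniques the list covers, can also be sketched in a few lines. Below is a generic illustration of unstructured magnitude pruning (zeroing the smallest-magnitude weights), not code from any repository in the list; the function name and toy data are invented for the example.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.random.randn(8, 8).astype(np.float32)
pruned = magnitude_prune(w, 0.5)  # at least half the entries become zero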