LightCompress and GlobalCom2

These tools are competitors: both provide toolkits for large-model compression and inference acceleration, target similar model families, and publish at similar research venues.

| Metric         | LightCompress             | GlobalCom2                |
|----------------|---------------------------|---------------------------|
| Overall score  | 64 (Established)          | 36 (Emerging)             |
| Maintenance    | 20/25                     | 10/25                     |
| Adoption       | 10/25                     | 7/25                      |
| Maturity       | 16/25                     | 16/25                     |
| Community      | 18/25                     | 3/25                      |
| Stars          | 688                       | 39                        |
| Forks          | 72                        | 1                         |
| Downloads      |                           |                           |
| Commits (30d)  | 36                        | 0                         |
| Language       | Python                    | Python                    |
| License        | Apache-2.0                | Apache-2.0                |
| Package        | No package, no dependents | No package, no dependents |

About LightCompress

ModelTC/LightCompress

[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models.

This toolkit helps organizations make their large AI models, such as those for generating text, images, or video, run more efficiently and use less memory. It takes an existing large model and outputs a smaller, faster version that largely preserves the original's quality. It is aimed at AI developers and MLOps engineers who need to deploy large models cost-effectively across varied hardware.
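To make the "smaller, faster version" concrete, the sketch below shows the core idea behind one common compression technique the toolkit supports, post-training weight quantization: storing weights as 8-bit integers plus a float scale instead of 32-bit floats. This is a generic illustration in plain NumPy, not LightCompress's actual API; the function names and shapes here are invented for the example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: each weight is stored in
    1 byte, and a single float scale maps int8 values back to floats."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Approximate reconstruction of the original float weights.
    return q.astype(np.float32) * scale

# A toy "layer" of float32 weights standing in for a real model tensor.
w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# ~4x smaller: 1 byte per weight instead of 4.
print(w.nbytes // q.nbytes)  # 4

# Rounding error per weight is bounded by the quantization step.
print(np.abs(dequantize(q, scale) - w).max() < scale)  # True
```

Real toolkits layer calibration, per-channel scales, and more advanced schemes (GPTQ, AWQ, etc.) on top of this basic idea to keep accuracy loss small.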

Tags: AI model deployment, MLOps, large language models, computer vision models, generative AI

About GlobalCom2

xuyang-liu16/GlobalCom2

[AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models

This project helps machine learning engineers and researchers speed up inference for Large Vision-Language Models (LVLMs) on high-resolution images. It compresses the visual token stream before it reaches the language model, so practitioners get the same kinds of outputs faster and can deploy powerful models more efficiently.
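The gist of visual-token compression can be sketched as "score each visual token's importance, keep only the top fraction." The NumPy example below uses token L2 norm as a stand-in importance score; GlobalCom2's actual method derives importance from the global thumbnail view to guide compression of the high-resolution crops, which this simplified sketch does not model.

```python
import numpy as np

def compress_visual_tokens(tokens: np.ndarray, keep_ratio: float = 0.25):
    """Keep only the highest-importance visual tokens.

    `tokens` has shape (num_tokens, hidden_dim). Importance here is each
    token's L2 norm -- a simple proxy, not GlobalCom2's thumbnail-guided
    scoring. Kept tokens stay in their original order.
    """
    k = max(1, int(len(tokens) * keep_ratio))
    importance = np.linalg.norm(tokens, axis=1)
    keep = np.sort(np.argsort(importance)[-k:])  # top-k indices, sorted
    return tokens[keep]

# 576 visual tokens (e.g. a 24x24 patch grid) with hidden size 1024.
tokens = np.random.default_rng(1).normal(size=(576, 1024))
compressed = compress_visual_tokens(tokens, keep_ratio=0.25)
print(compressed.shape)  # (144, 1024)
```

Feeding 144 tokens instead of 576 into the language model cuts attention and KV-cache cost roughly in proportion, which is where the inference speedup comes from.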

Tags: machine learning inference, vision-language models, image processing, model optimization, deep learning, deployment

Scores updated daily from GitHub, PyPI, and npm data.