LightCompress and GlobalCom2
These tools are competitors: both provide toolkits for large-model compression and inference acceleration, target similar model families, and publish at overlapping research venues.
About LightCompress
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models.
This toolkit helps organizations run large AI models, such as those that generate text, images, or video, faster and with less memory. It takes an existing large model and produces a smaller, faster version that closely preserves the original's quality. It targets AI developers and MLOps engineers who need to deploy such models cost-effectively across a range of hardware.
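LightCompress's actual compression algorithms are not shown here; as a generic illustration of one common technique in this space, the following is a minimal NumPy sketch of symmetric per-tensor int8 weight quantization. The function names and shapes are hypothetical, not LightCompress's API.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: store float32 weights
    as int8 codes plus a single float scale (4x smaller storage)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float32 weights from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)

ratio = w.nbytes / q.nbytes          # int8 is 1 byte vs. 4 for float32
err = np.abs(w - dequantize(q, scale)).max()  # bounded by scale / 2
print(f"compression: {ratio:.1f}x, max abs error: {err:.4f}")
```

Real toolkits combine quantization with pruning, low-rank factorization, and calibration data to keep accuracy loss small; this sketch only shows the storage-reduction idea.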
About GlobalCom2
xuyang-liu16/GlobalCom2
[AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models
This project helps machine learning engineers and researchers accelerate the inference of Large Vision-Language Models (LVLMs) on high-resolution images. It takes high-resolution images as input and returns LVLM outputs faster by intelligently compressing the visual information the model must attend to, letting practitioners deploy these powerful models more efficiently.
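GlobalCom2's specific compression strategy is not reproduced here; as a generic sketch of the broader idea of visual-token reduction in LVLMs, the snippet below keeps only the top-scoring fraction of image tokens before they reach the language model. The helper name, scores, and shapes are illustrative assumptions, not the project's API.

```python
import numpy as np

def prune_visual_tokens(tokens: np.ndarray, scores: np.ndarray,
                        keep_ratio: float = 0.25):
    """Keep the highest-scoring visual tokens (e.g. scored by how much
    attention they receive), preserving their original spatial order."""
    n_keep = max(1, int(len(tokens) * keep_ratio))
    keep_idx = np.sort(np.argsort(scores)[-n_keep:])
    return tokens[keep_idx], keep_idx

rng = np.random.default_rng(1)
tokens = rng.standard_normal((576, 64)).astype(np.float32)  # e.g. a 24x24 patch grid
scores = rng.random(576)  # stand-in importance scores

pruned, idx = prune_visual_tokens(tokens, scores)
print(pruned.shape)  # (144, 64): the LLM now attends over 4x fewer tokens
```

Since self-attention cost grows quadratically with sequence length, shrinking the visual token count by 4x cuts the attention work over those tokens by roughly 16x, which is where the inference speedup comes from.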