xuyang-liu16/GlobalCom2
[AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models
This project helps machine learning engineers and researchers accelerate inference in Large Vision-Language Models (LVLMs) that process high-resolution images. It compresses the redundant visual information extracted from high-resolution inputs before the language model processes it, so the LVLM produces its outputs faster. This lets practitioners deploy and use powerful models more efficiently.
Use this if you are working with Large Vision-Language Models (LVLMs) and need to significantly speed up their processing of high-resolution images without losing critical information.
Not ideal if you are working with standard-resolution images or do not need specialized acceleration for LVLMs.
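The repository's actual interface is not shown on this page. As a rough illustration of the general idea behind plug-and-play visual token compression, here is a minimal sketch of importance-based token pruning; the function name, the retention ratio, and the use of generic importance scores are assumptions for illustration, not GlobalCom2's API or scoring rule.

```python
import torch

def prune_visual_tokens(visual_tokens: torch.Tensor,
                        importance: torch.Tensor,
                        keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep the top-k most important visual tokens (illustrative sketch only).

    visual_tokens: (num_tokens, hidden_dim) features from a vision encoder.
    importance:    (num_tokens,) per-token importance scores, e.g. derived from
                   attention weights (an assumption, not GlobalCom2's method).
    keep_ratio:    fraction of tokens to retain.
    """
    num_keep = max(1, int(visual_tokens.size(0) * keep_ratio))
    # Select the highest-scoring tokens, then restore their original order.
    keep_idx = torch.topk(importance, num_keep).indices.sort().values
    return visual_tokens[keep_idx]

# Toy usage: 2,880 hypothetical high-resolution patch tokens reduced to 720.
tokens = torch.randn(2880, 4096)
scores = torch.rand(2880)
compressed = prune_visual_tokens(tokens, scores, keep_ratio=0.25)
print(compressed.shape)  # torch.Size([720, 4096])
```

Fewer visual tokens means fewer positions for the language model to attend over, which is where the inference speedup comes from.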
Stars
39
Forks
1
Language
Python
License
Apache-2.0
Category
Transformers
Last pushed
Jan 27, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xuyang-liu16/GlobalCom2"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
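Equivalent to the curl command above, a minimal Python sketch using the requests library; the response schema is not documented on this page, so the example simply pretty-prints whatever JSON comes back.

```python
import json
import requests

# Same endpoint as the curl command above; no API key needed for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/xuyang-liu16/GlobalCom2"

response = requests.get(url, timeout=30)
response.raise_for_status()

# Response fields are undocumented here, so just dump the JSON as returned.
print(json.dumps(response.json(), indent=2))
```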
Compare
Higher-rated alternatives
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
p-e-w/heretic
Fully automatic censorship removal for language models
Orion-zhen/abliteration
Make abliterated models with transformers, easy and fast
YerbaPage/LongCodeZip
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
locuslab/wanda
A simple and effective LLM pruning approach.