xuyang-liu16/GlobalCom2

[AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models

Score: 36 / 100 (Emerging)

This project helps machine learning engineers and researchers accelerate inference in Large Vision-Language Models (LVLMs) that process high-resolution images. It takes high-resolution images as input and speeds up LVLM inference by intelligently compressing visual information, allowing practitioners to deploy and use powerful models more efficiently.

Use this if you are working with Large Vision-Language Models (LVLMs) and need to significantly speed up their processing of high-resolution images without losing critical information.

Not ideal if you are working with standard-resolution images or do not need specialized acceleration for LVLMs.

Machine Learning Inference, Vision-Language Models, Image Processing, Model Optimization, Deep Learning, Deployment
No package · No dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 3 / 25

Stars: 39
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Jan 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xuyang-liu16/GlobalCom2"

The API is open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
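The same endpoint can also be queried from Python. Below is a minimal sketch using only the standard library; the endpoint URL comes from the curl command above, but the structure of the JSON response is not documented here, so treat the parsed fields as an assumption.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON.

    The response shape (e.g. which score fields it contains) is an
    assumption; inspect the returned dict before relying on keys.
    """
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("transformers", "xuyang-liu16/GlobalCom2"))
```

If you have a key, you would pass it per the API's own documentation (e.g. as a header on a `urllib.request.Request`); the exact header name is not stated here.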