HYUNJS/STTM
[ICCV 2025] Multi-Granular Spatio-Temporal Token Merging for Training-Free Acceleration of Video LLMs
This project makes Video Large Language Models (LLMs) run much faster without retraining, by merging redundant spatio-temporal visual tokens at multiple granularities; it is useful for tasks like video analysis or automated content moderation. Given an existing Video LLM and its video input, it reduces the number of tokens the model must process, so the model understands and answers questions about the video content more quickly. It is aimed at researchers and practitioners working with video AI who need to accelerate their video processing workflows.
Use this if you need to dramatically speed up Video LLM inference across video understanding tasks without the time and cost of retraining; a rough sketch of the idea follows below.
Not ideal if you are looking to improve the accuracy or capabilities of Video LLMs rather than just their speed.
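To give a concrete flavor of what training-free token merging means, here is a minimal, illustrative Python sketch of a generic similarity-based merging pass over visual tokens. It is not the STTM algorithm (which operates at multiple spatial and temporal granularities); the function name, threshold, and greedy strategy are assumptions chosen for brevity.

import torch

# Illustrative only: a greedy cosine-similarity merge, NOT the STTM method.
def merge_similar_tokens(tokens: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    # tokens: (num_tokens, dim) visual tokens from a video encoder.
    normed = torch.nn.functional.normalize(tokens, dim=-1)
    kept = [tokens[0].clone()]   # merged token values
    kept_dir = normed[0]         # direction of the current merged token
    count = 1                    # how many tokens merged into kept[-1]
    for tok, direction in zip(tokens[1:], normed[1:]):
        if torch.dot(kept_dir, direction) > threshold:
            count += 1
            kept[-1] += (tok - kept[-1]) / count  # running mean of merged tokens
            kept_dir = torch.nn.functional.normalize(kept[-1], dim=-1)
        else:
            kept.append(tok.clone())
            kept_dir = direction
            count = 1
    return torch.stack(kept)

# A redundant video (many near-duplicate frames) collapses to far fewer tokens,
# which is what shortens the LLM's prefill and speeds up inference.
video_tokens = torch.randn(1000, 64)
merged = merge_similar_tokens(video_tokens, threshold=0.85)
print(video_tokens.shape, "->", merged.shape)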
Stars
57
Forks
2
Language
Python
License
—
Category
—
Last pushed
Feb 02, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HYUNJS/STTM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
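If you would rather call the endpoint from Python, a minimal sketch using the requests library is below. The API-key header and the JSON response format are assumptions, since neither is documented here.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/HYUNJS/STTM"
headers = {}  # with a key, perhaps {"Authorization": "Bearer <your-key>"} -- unverified assumption
resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g., rate limiting) early
print(resp.json())       # repository quality data, assumed to be returned as JSON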
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice