TIGER-AI-Lab/Vamba
Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025]
This project helps AI researchers and machine learning engineers process and understand very long videos, including hour-long footage. Given an input video file and a text query, such as "Describe the magic trick," it outputs a textual description or answer grounded in the video's content. It is aimed at professionals building or evaluating advanced video-analysis systems.
101 stars. No commits in the last 6 months.
Use this if you are a researcher or engineer working on advanced video understanding models and need to efficiently analyze or describe the content of extended video footage.
Not ideal if you are looking for an off-the-shelf application for simple video editing or personal video organization.
Stars
101
Forks
11
Language
Python
License
MIT
Category
Last pushed
Jul 28, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/TIGER-AI-Lab/Vamba"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
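The curl command above can also be wrapped in a few lines of Python. This is a minimal sketch, assuming only the endpoint URL shown above; the response schema is not documented here, so the payload is printed as raw JSON rather than parsed into named fields:

```python
import json
import urllib.request

# Base path taken from the curl example above; "transformers" is the
# category segment used for this repo.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API endpoint URL for a given GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (schema assumed, not documented)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the raw quality data for TIGER-AI-Lab/Vamba.
    print(json.dumps(fetch_quality("TIGER-AI-Lab", "Vamba"), indent=2))
```

Without an API key this counts against the shared 100-requests/day limit, so cache responses rather than polling.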
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice