kkahatapitiya/LangRepo
Code for our ACL 2025 paper "Language Repository for Long Video Understanding"
This tool processes very long videos, such as bodycam or surveillance footage, to extract key events and answer questions about them. It takes pre-extracted text descriptions (captions) of video segments and condenses them into a structured, all-text repository. That repository can then be queried to answer complex questions about the video's contents or to identify when specific events occurred. It's aimed at researchers working on video understanding in AI and machine learning.
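To make the caption-to-repository flow concrete, here is a minimal, self-contained Python sketch. The names (CaptionEntry, build_repository, find_event) are illustrative only and are not the project's actual API; the real system builds and reads its repository with LLM-based operations rather than the simple deduplication and keyword matching shown here.

# Hypothetical sketch of the caption-to-repository idea described above.
# Not LangRepo's actual API: entry type and function names are invented
# for illustration, and the matching logic is deliberately simplistic.
from dataclasses import dataclass

@dataclass
class CaptionEntry:
    start_sec: float   # segment start time within the video
    text: str          # pre-extracted caption for that segment

def build_repository(captions: list[CaptionEntry]) -> list[CaptionEntry]:
    """Condense captions by dropping consecutive near-duplicate entries."""
    repo: list[CaptionEntry] = []
    for cap in captions:
        if repo and cap.text.strip().lower() == repo[-1].text.strip().lower():
            continue  # redundant segment; keep only the earlier entry
        repo.append(cap)
    return repo

def find_event(repo: list[CaptionEntry], query: str) -> list[float]:
    """Return start times of segments whose caption mentions the query."""
    q = query.lower()
    return [c.start_sec for c in repo if q in c.text.lower()]

captions = [
    CaptionEntry(0, "A person walks down a hallway."),
    CaptionEntry(30, "A person walks down a hallway."),
    CaptionEntry(60, "The person opens a door and enters a room."),
]
repo = build_repository(captions)
print(find_event(repo, "opens a door"))  # -> [60]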
No commits in the last 6 months.
Use this if you need to analyze exceptionally long videos and work around current large language models' limited ability to retain information over long durations.
Not ideal if you're looking for a user-friendly application for casual video analysis or if you don't have programming experience.
Stars: 36
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Jun 17, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kkahatapitiya/LangRepo"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
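If you prefer Python over curl, the same request looks like the sketch below, using only the standard library. That the endpoint returns JSON is an assumption, and the authentication mechanism for a free key is not documented here, so it is omitted.

# Minimal sketch of calling the endpoint shown above from Python.
# Assumes the API returns JSON; the response fields are not documented
# on this page, so we just pretty-print whatever comes back.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/kkahatapitiya/LangRepo"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumption: the endpoint returns JSON

print(json.dumps(data, indent=2))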
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice