boheumd/MA-LMM
(CVPR 2024) MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding
This project helps researchers and developers analyze long-form video content. It takes raw video files and text questions or prompts as input, and outputs classifications, answers to those questions, or captions describing the video. It is aimed at computer vision and AI researchers working on video understanding tasks.
347 stars. No commits in the last 6 months.
Use this if you are a researcher or AI developer focused on understanding complex, long-duration video content for tasks like video classification, question answering, or captioning.
Not ideal if you need a pre-built, ready-to-use application for everyday video analysis without diving into model architecture and training.
Stars: 347
Forks: 30
Language: Python
License: MIT
Category: transformers
Last pushed: Jul 19, 2024
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/boheumd/MA-LMM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice