boheumd/MA-LMM

(2024CVPR) MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding

Quality score: 40 / 100 (Emerging)

This project helps researchers and developers working with video data analyze long-form video content more effectively. It takes raw video files plus text questions or prompts as input, and outputs classifications, answers to questions, or captions describing the video's content. It is designed for computer vision and AI researchers focused on video understanding tasks.

347 stars. No commits in the last 6 months.

Use this if you are a researcher or AI developer focused on understanding complex, long-duration video content for tasks like video classification, question answering, or captioning.

Not ideal if you need a pre-built, ready-to-use application for everyday video analysis without diving into model architecture and training.

video-analysis multimodal-ai computer-vision video-captioning visual-question-answering
Status: Stale (no commits in 6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 347
Forks: 30
Language: Python
License: MIT
Last pushed: Jul 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/boheumd/MA-LMM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
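The same endpoint can be called programmatically. A minimal sketch in Python: only the URL pattern comes from the curl example above; the `fetch_quality` helper and the assumption that the response body is JSON are illustrative, not documented API behavior.

```python
# Sketch of querying the pt-edge quality API for a repository.
# Assumption: the endpoint returns a JSON body (not confirmed by the page above).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (assumes a JSON response)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Matches the curl example for this repository.
    print(quality_url("transformers", "boheumd", "MA-LMM"))
```

Note that unauthenticated requests are limited to 100/day per the page above; how an API key is attached (header or query parameter) is not documented here.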