wgcyeo/WorldMM
[CVPR 2026] WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning
This project helps AI researchers and developers working on video understanding build intelligent agents that reason about long-duration videos. It ingests raw video together with transcripts and captions to construct detailed multimodal memories, and produces an agent capable of answering complex questions about events and information spread across extended video content.
Use this if you need to develop and evaluate AI models that can comprehend and answer questions about very long videos, such as those documenting daily life or extended events.
Not ideal if you are looking for a simple tool for basic video annotation or short-clip analysis without complex, long-term reasoning requirements.
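The description above outlines a pipeline: video frames, transcripts, and captions are stored as timestamped multimodal memories, which an agent later queries to answer questions. Below is a minimal, hypothetical Python sketch of that general idea, not WorldMM's actual implementation; the class names, the keyword-overlap retrieval, and the example entries are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    # One timestamped observation from a single modality.
    timestamp: float  # seconds into the video
    modality: str     # e.g. "caption" (visual) or "transcript" (audio)
    text: str

@dataclass
class MultimodalMemory:
    # Toy memory bank: stores entries from all modalities in one list
    # and retrieves them by naive keyword overlap with the query.
    entries: list = field(default_factory=list)

    def add(self, timestamp: float, modality: str, text: str) -> None:
        self.entries.append(MemoryEntry(timestamp, modality, text))

    def retrieve(self, query: str, top_k: int = 3) -> list:
        # Score each entry by shared words with the query; break ties
        # by earlier timestamp, and drop entries with no overlap.
        q = set(query.lower().split())
        scored = [(len(q & set(e.text.lower().split())), e) for e in self.entries]
        scored.sort(key=lambda s: (-s[0], s[1].timestamp))
        return [e for score, e in scored[:top_k] if score > 0]

# Populate the memory from different modalities and query it.
memory = MultimodalMemory()
memory.add(12.0, "caption", "a person unlocks the front door")
memory.add(15.5, "transcript", "I finally found my keys")
memory.add(900.0, "caption", "the person cooks dinner in the kitchen")

hits = memory.retrieve("found keys")
print([e.text for e in hits])  # → ['I finally found my keys']
```

A real system of this kind would replace the keyword overlap with learned embeddings and pass the retrieved entries to a language model, but the store-then-retrieve structure is the same.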
Stars: 61
Forks: 2
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 05, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wgcyeo/WorldMM"
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter vision-language model (VLM) from scratch in just 1 hour!
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model