mlvlab/Flipped-VQA
Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023)
Flipped-VQA takes a video and a natural-language question about it, then outputs an answer by reasoning over temporal and causal relationships within the video. It is aimed at AI researchers, machine learning engineers, and data scientists working on understanding complex video content.
No commits in the last 6 months.
Use this if you are a researcher or developer aiming to improve AI models' ability to understand video content and answer complex questions that require temporal and causal reasoning.
Not ideal if you are looking for an out-of-the-box application for everyday video analysis without deep technical expertise in machine learning.
Stars: 78
Forks: 12
Language: Python
License: MIT
Category:
Last pushed: Mar 26, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mlvlab/Flipped-VQA"
Open to everyone: 100 requests/day with no key required. A free API key raises the limit to 1,000 requests/day.
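If you would rather call the endpoint from code, here is a minimal Python sketch using the requests library. The URL is taken from the curl example above; the JSON response shape and the X-API-Key header name are assumptions, since neither is documented on this page.

import requests

# Endpoint copied from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/mlvlab/Flipped-VQA"

def fetch_quality(api_key: str | None = None) -> dict:
    """Fetch the quality data for mlvlab/Flipped-VQA.

    Anonymous access is limited to 100 requests/day; a free key raises
    that to 1,000/day. The "X-API-Key" header name is an assumption,
    not something documented on this page.
    """
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx errors instead of parsing an error body
    return resp.json()       # assumes the endpoint returns JSON

if __name__ == "__main__":
    print(fetch_quality())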
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice