mbzuai-oryx/Video-LLaVA
PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models
This tool helps you understand what's happening in videos: you provide a video and ask questions in plain language, and the model answers while highlighting, or 'grounding', the objects it mentions in the frames. This is useful for anyone who needs to extract detailed information from video content quickly, such as researchers analyzing human behavior or media analysts reviewing news footage.
262 stars. No commits in the last 6 months.
Use this if you need to precisely locate and understand objects or events in a video based on conversational prompts, especially when audio context is important.
Not ideal if your primary need is simple video transcription or if you require real-time, ultra-low-latency object detection for live streams.
Stars: 262
Forks: 12
Language: Python
License: —
Category: —
Last pushed: Aug 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mbzuai-oryx/Video-LLaVA"
The API is open to everyone: 100 requests/day with no key, or 1,000/day with a free key.
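If you would rather call the endpoint from a script than curl, here is a minimal Python sketch using the `requests` library. It assumes the endpoint returns JSON and that an optional key is sent in an `X-API-Key` header; the header name and the response field names are assumptions to verify against the API docs.

import requests  # third-party package: pip install requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/mbzuai-oryx/Video-LLaVA"

def fetch_repo_quality(url: str, api_key: str | None = None) -> dict:
    """Fetch the quality record for a repo and return the parsed JSON body."""
    # A key is optional (100 requests/day without one). The `X-API-Key`
    # header name is an assumption; check the API docs for the real
    # authentication scheme.
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx errors, e.g. rate limiting
    return resp.json()

if __name__ == "__main__":
    data = fetch_repo_quality(URL)
    # Field names such as `stars` or `last_pushed` are guesses at the
    # schema; print the raw payload first to see what actually comes back.
    print(data)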
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming..."
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice