DAMO-NLP-SG/Video-LLaMA
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
Video-LLaMA helps you understand the content of videos and images by answering your questions about them. You input a video or an image, and the model provides detailed text descriptions or answers based on both the visual and auditory information present. This is ideal for content analysts, researchers, or anyone needing to quickly extract insights from multimedia.
3,134 stars. No commits in the last 6 months.
Use this if you want to extract insights from video and image content by asking natural-language questions about what you see and hear.
Not ideal if you primarily need to process text-only data or require real-time, ultra-low-latency responses for live video streams.
Stars: 3,134
Forks: 285
Language: Python
License: BSD-3-Clause
Category:
Last pushed: Jun 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DAMO-NLP-SG/Video-LLaMA"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
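A minimal Python sketch of the same request, assuming the standard requests library; the header used to pass an API key and the shape of the JSON payload are assumptions, not documented above:

import requests

# Endpoint shown in the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/DAMO-NLP-SG/Video-LLaMA"

def fetch_repo_quality(api_key=None):
    # Hypothetical: the API-key header name is an assumption, not documented here.
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    response = requests.get(URL, headers=headers, timeout=10)
    response.raise_for_status()  # surface 4xx/5xx (e.g. rate-limit) errors
    return response.json()

if __name__ == "__main__":
    data = fetch_repo_quality()
    print(data)  # inspect the payload; field names are not documented above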
Higher-rated alternatives
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
NVlabs/Eagle
Eagle: Frontier Vision-Language Models with Data-Centric Strategies