engindeniz/vitis
[ICCV 2023 CLVL Workshop] Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts
This project answers text questions about video content in zero-shot and few-shot settings, without being trained on that exact type of question. You provide a video and a text question, and it returns a text answer. This is useful for researchers and data scientists who need to quickly get insights from large video datasets or build new video-understanding applications.
No commits in the last 6 months.
Use this if you need to perform video question answering with very little or no specific training data for your particular questions or video types.
Not ideal if you are looking for an out-of-the-box application rather than a research framework that requires setup and data processing.
Stars
14
Forks
—
Language
Python
License
Apache-2.0
Category
Last pushed
Jan 13, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/engindeniz/vitis"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter vision multimodal VLM from scratch in just 1 hour! 🌏
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model