whwu95/FreeVA

FreeVA: Offline MLLM as Training-Free Video Assistant

Quality score: 27 / 100 (Experimental)

This project lets researchers and machine learning practitioners evaluate multimodal large language models (MLLMs) on video question-answering tasks without any additional training. It takes existing image-based MLLMs and video benchmark datasets as input and outputs zero-shot video QA and text-generation metrics, enabling direct comparison across models. It is aimed at scientists and ML researchers working on video understanding; a sketch of the underlying idea follows.
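
The training-free trick is to reuse an image-only MLLM for video by sampling frames and aggregating their features over time. Below is a minimal Python sketch of that idea, not FreeVA's actual code: the ImageMLLM interface (encode_image, generate) is a hypothetical stand-in, and mean pooling is just one simple aggregation choice.

import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 8) -> list[np.ndarray]:
    """Uniformly sample num_frames RGB frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames, dtype=int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            # OpenCV decodes to BGR; convert to RGB for the model.
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

def answer_video_question(mllm, video_path: str, question: str) -> str:
    """Zero-shot video QA with an image-only MLLM, no training.

    mllm is a hypothetical object exposing encode_image and generate;
    real MLLM APIs (LLaVA and others) differ.
    """
    frames = sample_frames(video_path)
    # Encode each frame independently with the image encoder...
    features = [mllm.encode_image(f) for f in frames]
    # ...then aggregate across time (mean pooling as one simple choice)
    # and query the language model with the pooled visual features.
    video_feature = np.mean(np.stack(features), axis=0)
    return mllm.generate(visual_features=video_feature, prompt=question)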

No commits in the last 6 months.

Use this if you want to quickly benchmark how well existing image-based MLLMs can understand and answer questions about videos without any additional training, providing a strong baseline for your research.

Not ideal if you are looking to fine-tune a video-specific MLLM or want a tool for general video content creation or analysis outside of research benchmarking.

Tags: video-understanding, machine-learning-research, model-evaluation, multimodal-AI, language-models
Status flags: Stale (6 months) · No Package · No Dependents

Score breakdown (the four categories sum to the 27/100 overall):
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 3 / 25

Stars: 69
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Jun 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/whwu95/FreeVA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
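
For programmatic use, here is a minimal Python sketch that calls the same endpoint using only the standard library. The response schema is not documented here, so the sketch just fetches and pretty-prints the JSON rather than assuming field names.

import json
import urllib.request

# Same endpoint as the curl command above; no API key is needed
# for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/whwu95/FreeVA"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Pretty-print the full payload to see what the API actually returns.
print(json.dumps(data, indent=2))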