TIGER-AI-Lab/VideoScore
Official repository for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" (EMNLP 2024).
This tool lets video-generation developers and researchers automatically evaluate the quality of AI-generated videos. Given a text prompt and the generated video, it outputs scores across aspects such as visual quality, temporal consistency, and alignment with the text, enabling quick, objective assessment without extensive human review.
Use this if you are developing or comparing video generation models and need an automated, reliable way to score their output against human judgment benchmarks.
Not ideal if you need a creative review or subjective artistic critique of video content, as it focuses on measurable quality metrics.
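To make the input/output contract above concrete, here is a hypothetical sketch of what a scoring call could look like. The function name, aspect names, and return shape are illustrative assumptions, not the repository's actual API; a real scorer would run the VideoScore model internally.

```python
# Hypothetical sketch only: names and shapes are assumptions, not VideoScore's API.
from typing import Dict

# Illustrative aspect names, mirroring the dimensions mentioned in the description.
ASPECTS = ["visual_quality", "temporal_consistency", "text_alignment"]

def score_video(prompt: str, video_path: str) -> Dict[str, float]:
    """Illustrative stub showing the expected shape of a result:
    one score per evaluated aspect. A real implementation would
    load the VideoScore model and score the video against the prompt."""
    # Placeholder values; only the structure of the output is meaningful here.
    return {aspect: 0.0 for aspect in ASPECTS}
```

The point is the interface: one (prompt, video) pair in, a per-aspect score map out, which is what makes the metric easy to drop into an automated evaluation loop.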
Stars
113
Forks
5
Language
Python
License
MIT
Category
ml-frameworks
Last pushed
Dec 04, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TIGER-AI-Lab/VideoScore"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
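The same request can be made from Python. The endpoint URL and category path come from the curl line above; the `Authorization: Bearer` header for the keyed tier and the response schema are assumptions, so check the API's documentation before relying on them.

```python
import json
import urllib.request
from typing import Optional

# Base endpoint taken from the curl example in this listing.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for a repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Fetch a repository's quality data as a dict.

    The Bearer-token header is a guess at how the free key is sent;
    the response is assumed to be JSON.
    """
    req = urllib.request.Request(build_url(category, owner, repo))
    if api_key:
        # Hypothetical auth scheme; verify against the actual API docs.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

Usage would be `fetch_quality("ml-frameworks", "TIGER-AI-Lab", "VideoScore")`, matching the curl example.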
Higher-rated alternatives
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce
Unsupervised video summarization with deep reinforcement learning (AAAI'18)
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch