TIGER-AI-Lab/VideoGenHub

A one-stop library to standardize the inference and evaluation of all the conditional video generation models.

Score: 37 / 100 (Emerging)

This is a developer-focused library for working with conditional video generation models. It helps machine learning engineers and researchers standardize how they generate and evaluate videos from text descriptions or still images: you provide text prompts or source images, and it produces generated video through a consistent interface across different models.

No commits in the last 6 months.

Use this if you are an ML engineer or researcher who needs a unified way to experiment with, compare, and benchmark various text-to-video or image-to-video generation models.

Not ideal if you are an end-user looking for a ready-to-use application to create videos without writing code or interacting with machine learning models directly.

Tags: video-generation, generative-AI, machine-learning-research, model-benchmarking, computer-vision
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 51
Forks: 7
Language: Python
License: MIT
Last pushed: Feb 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/TIGER-AI-Lab/VideoGenHub"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
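For scripted access, the same endpoint can be queried from Python. The sketch below builds the endpoint URL from the pattern shown in the curl command above; the `fetch_quality` helper and the response handling are illustrative assumptions, since the response schema is not documented here.

```python
import urllib.request

# Base path taken from the curl example above; everything past it is
# category/owner/repo, per the URL pattern shown on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> bytes:
    """Fetch the raw response body (hypothetical helper; anonymous
    access is rate-limited to 100 requests/day, 1,000/day with a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return resp.read()

print(quality_url("generative-ai", "TIGER-AI-Lab", "VideoGenHub"))
```

Running the script prints the same URL used in the curl example; `fetch_quality` is only invoked when you actually want to hit the (rate-limited) endpoint.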