TIGER-AI-Lab/VideoGenHub
A one-stop library to standardize the inference and evaluation of all the conditional video generation models.
This is a developer-focused library for conditional video generation. It helps machine learning engineers and researchers standardize how they generate and evaluate videos: you provide text prompts or source images, and it produces generated video, with a consistent interface across different models.
No commits in the last 6 months.
Use this if you are an ML engineer or researcher who needs a unified way to experiment with, compare, and benchmark various text-to-video or image-to-video generation models.
Not ideal if you are an end-user looking for a ready-to-use application to create videos without writing code or interacting with machine learning models directly.
Stars: 51
Forks: 7
Language: Python
License: MIT
Category: generative-ai
Last pushed: Feb 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/TIGER-AI-Lab/VideoGenHub"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
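The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the response is JSON (the response schema is not documented here, so the result is returned as-is; the `quality_url` helper is ours, not part of the API):

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/generative-ai"

def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for a given GitHub owner/repo pair.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch and decode the quality data; assumes a JSON body.
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("TIGER-AI-Lab", "VideoGenHub"))
```

Without a key this is rate-limited to 100 requests/day, so cache responses if you poll many repositories.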
Higher-rated alternatives
open-mmlab/mmagic
OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄:...
jdh-algo/JoyVASA
Diffusion-based Portrait and Animal Animation
haidog-yaqub/EzAudio
High-quality Text-to-Audio Generation with Efficient Diffusion Transformer
404-Repo/404-gen-blender-add-on
Blender add-on for 404-GEN 3D generator running on Bittensor
linzhiqiu/t2v_metrics
Evaluating text-to-image/video/3D models with VQAScore