sihyun-yu/PVDM

[CVPR'23] Video Probabilistic Diffusion Models in Projected Latent Space

Score: 36 / 100 (Emerging)

This project helps researchers and engineers create new video content from existing footage. You input a collection of videos, and the system learns their patterns to generate novel, realistic video clips that match the style and content of the originals. This is ideal for those working in video synthesis, content generation, or animation research.

324 stars. No commits in the last 6 months.

Use this if you need to generate high-quality, realistic video sequences by learning from a dataset of existing videos.

Not ideal if you need text-to-video generation out of the box, or a system that handles highly complex, unconstrained datasets without significant custom training.

video-synthesis generative-ai content-creation computer-vision deep-learning-research
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 324
Forks: 16
Language: Python
License: MIT
Last pushed: May 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sihyun-yu/PVDM"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
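The same request can be made from Python with the standard library. This is a minimal sketch: the `quality_url` and `fetch_quality` helpers are illustrative names, and it assumes the endpoint returns a JSON body (the listing does not document the response schema).

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record; assumes a JSON response."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("sihyun-yu", "PVDM"))
    # → https://pt-edge.onrender.com/api/v1/quality/diffusion/sihyun-yu/PVDM
```

Within the free tier, no authentication header is needed; calls beyond 100/day would require the free key mentioned above.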