nihaomiao/CVPR23_LFDM

The PyTorch implementation of our CVPR 2023 paper "Conditional Image-to-Video Generation with Latent Flow Diffusion Models"

Quality score: 41 / 100 (Emerging)

This project helps researchers and creators generate short videos of human motion, gestures, or actions from a single starting image. You provide an image of a person, and it produces a video of that person performing a specified action. It is aimed at computer-vision researchers, animation specialists, and content creators who need to generate dynamic visual sequences.

473 stars. No commits in the last 6 months.

Use this if you need to create realistic videos of human motion from static images, especially for research in human activity understanding or synthetic media.

Not ideal if you need to generate videos from text descriptions or require very long, complex video sequences beyond short human actions.

Topics: human-motion-synthesis, video-generation, computer-vision, synthetic-media, animation
Flags: Stale (no pushes in 6 months), No Package, No Dependents
Score breakdown (each component is out of 25; the four sum to the 41 / 100 composite):
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25
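As a sanity check on the breakdown, a minimal Python sketch (assuming the composite is a plain, unweighted sum of the four components, which matches the numbers on this page):

subscores = {
    "maintenance": 0,
    "adoption": 10,
    "maturity": 16,
    "community": 15,
}

# Assumed scoring rule: composite = sum of four 25-point components.
composite = sum(subscores.values())
assert composite == 41  # matches the 41 / 100 shown above
print(f"Composite score: {composite} / 100")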

Stars: 473
Forks: 42
Language: Python
License: BSD-2-Clause
Last pushed: Jun 18, 2024
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/nihaomiao/CVPR23_LFDM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
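For programmatic access, a minimal Python sketch of the same request (the endpoint is the one from the curl example above, on the keyless tier; the response field names are not documented here, so the prints below are assumptions and you should inspect the raw JSON first):

import requests

# Same endpoint as the curl example above (keyless 100 requests/day tier).
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/nihaomiao/CVPR23_LFDM"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()

# Print the full payload first: the field names below ("score", "stars")
# are assumptions, not documented API fields.
print(data)
print("score:", data.get("score"), "stars:", data.get("stars"))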