nihaomiao/CVPR23_LFDM
The PyTorch implementation of our CVPR 2023 paper "Conditional Image-to-Video Generation with Latent Flow Diffusion Models"
This project helps researchers and creators generate short videos of human motion, gestures, or actions from a single starting image: given an image of a person, it produces a video of them performing a new action. It is aimed at computer vision researchers, animation specialists, and content creators who need to generate dynamic visual sequences.
473 stars. No commits in the last 6 months.
Use this if you need to create realistic videos of human motion from static images, especially for research in human activity understanding or synthetic media.
Not ideal if you need to generate videos from text descriptions or require very long, complex video sequences beyond short human actions.
Stars: 473
Forks: 42
Language: Python
License: BSD-2-Clause
Category:
Last pushed: Jun 18, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/nihaomiao/CVPR23_LFDM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
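The same request can be made from Python. Below is a minimal standard-library sketch; the endpoint path mirrors the curl example above, but the structure of the JSON response is an assumption, so inspect the actual payload before relying on specific fields.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch quality data for a repo (subject to the 100 requests/day
    keyless limit). Assumes the API returns a JSON object."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("diffusion", "nihaomiao", "CVPR23_LFDM")` issues the same request as the curl command shown above.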
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators