sergeytulyakov/mocogan
MoCoGAN: Decomposing Motion and Content for Video Generation
MoCoGAN generates short video clips, such as facial expressions or human actions, from random inputs, which is useful for researchers, animators, or marketing professionals. By decomposing the latent space, it gives independent control over the subject's identity (content) and their movement or action (motion): you can produce new movements for a specific character, or have different characters perform the same action. This makes it a practical source of diverse, synthetic video data for training, creative content, or analysis.
602 stars. No commits in the last 6 months.
Use this if you need to generate diverse short video clips where you want precise control over who or what is performing an action, and the specific action they perform.
Not ideal if you need to generate long, complex video narratives or photorealistic footage, or if you don't need separate motion and content controls.
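The core idea above, one fixed content code per clip plus a per-frame motion code, can be sketched in a few lines. This is an illustrative simplification: the dimensions and the use of plain Gaussian noise for motion are assumptions here, not the paper's exact setup (MoCoGAN generates motion codes with a recurrent network).

```python
import numpy as np

def sample_latents(n_frames=16, content_dim=50, motion_dim=10, seed=0):
    """Sketch of MoCoGAN-style latent sampling (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Content code: sampled once, held fixed for the whole clip (identity).
    z_content = rng.standard_normal(content_dim)
    # Motion codes: one per frame (action); the paper uses a GRU here.
    z_motion = rng.standard_normal((n_frames, motion_dim))
    # Each frame's generator input = fixed content + that frame's motion.
    frames = [np.concatenate([z_content, z_m]) for z_m in z_motion]
    return np.stack(frames)  # shape: (n_frames, content_dim + motion_dim)

latents = sample_latents()
print(latents.shape)  # (16, 60)
```

Because the first 50 dimensions are identical across all frames, swapping only the motion codes changes the action while keeping the character the same, and vice versa.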
Stars: 602
Forks: 113
Language: Python
License: —
Category: —
Last pushed: Dec 17, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sergeytulyakov/mocogan"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
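The same endpoint can be called from Python. This is a minimal sketch assuming the URL pattern shown in the curl example above; the response schema is not documented here, so the JSON is returned as-is.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner, repo):
    # Build the same endpoint the curl example above hits.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    # Network call; within the free tier (100 requests/day) no key is needed.
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("sergeytulyakov", "mocogan"))
```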
Higher-rated alternatives
AliaksandrSiarohin/first-order-model
This repository contains the source code for the paper First Order Motion Model for Image Animation
kenziyuliu/MS-G3D
[CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for...
yoyo-nb/Thin-Plate-Spline-Motion-Model
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
DK-Jang/motion_puzzle
Motion Puzzle - Official PyTorch implementation
paulstarke/PhaseBetweener
Creating animation sequences between sparse key frames using motion phase features.