sergeytulyakov/mocogan

MoCoGAN: Decomposing Motion and Content for Video Generation

Score: 41 / 100 (Emerging)

This helps researchers, animators, or marketing professionals generate short video clips, such as facial expressions or human actions, from random inputs. It allows for independent control over the subject's identity (content) and their movement or action (motion), producing new variations of movements for a specific character or different characters performing the same action. This is useful for anyone needing to create diverse, synthetic video data for training, creative content, or analysis.

602 stars. No commits in the last 6 months.

Use this if you need to generate diverse short video clips where you want precise control over who or what is performing an action, and the specific action they perform.

Not ideal if you need to generate long, complex video narratives or photorealistic footage, or if you have no need for separate motion and content controls.

video-synthesis facial-animation human-action-generation synthetic-data-generation character-animation
Badges: No License · Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 23 / 25


Stars: 602
Forks: 113
Language: Python
License: None
Last pushed: Dec 17, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sergeytulyakov/mocogan"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
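The curl command above can also be called from Python. The sketch below builds the per-repository endpoint URL and fetches the JSON response; the response schema and the `Authorization: Bearer` header for an API key are assumptions (the page only states the rate limits), so treat this as a minimal sketch, not the official client.

```python
import json
import urllib.request
from typing import Optional

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository endpoint URL shown in the curl example.
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    # Anonymous access allows 100 requests/day; a free key raises that
    # to 1,000/day. The header name used here is an assumption.
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    with urllib.request.urlopen(req, timeout=10) as resp:
        # Assumes the endpoint returns a JSON object.
        return json.load(resp)


# Example (uncomment to make a live request):
# data = fetch_quality("sergeytulyakov", "mocogan")
```

The URL builder is kept separate from the request so the endpoint path can be checked without hitting the network.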