showlab/MotionDirector
[ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models.
MotionDirector helps you create custom videos by teaching an AI model specific motions. You provide a set of short video clips showcasing a particular action (like 'a person riding a bicycle'), and the system then generates new, diverse videos featuring that exact motion, but with different subjects or settings. This is useful for animators, content creators, or marketers who need to generate consistent actions across various scenarios.
1,050 stars. No commits in the last 6 months.
Use this if you need to generate many videos featuring a specific, custom motion or action, without having to film or animate each one individually.
Not ideal if you only need static images or don't have existing video clips to define the motion you want to replicate.
Stars
1,050
Forks
60
Language
Python
License
Apache-2.0
Last pushed
Aug 21, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/showlab/MotionDirector"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
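The curl command above can also be called from a script. Below is a minimal Python sketch using only the standard library; the endpoint URL is taken from the listing, but the response schema is an assumption, so the result is parsed as generic JSON rather than typed fields.

```python
import json
import urllib.request

# Endpoint copied from the listing above. No API key is required for the
# free tier (100 requests/day); the JSON structure of the response is an
# assumption, so we treat it as an opaque dict.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/showlab/MotionDirector"


def fetch_quality(url: str = URL) -> dict:
    """Fetch the repo-quality record and return it as a parsed JSON dict."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality()
    print(json.dumps(data, indent=2))
```

Swap the repo path in the URL to query other projects under the same `diffusion` category.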
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
A lightweight image- and video-generation inference framework.
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators