AliaksandrSiarohin/first-order-model
This repository contains the source code for the paper "First Order Motion Model for Image Animation".
This project lets you bring static images to life by transferring motion from a video onto them. You provide a driving video (e.g., someone speaking or dancing) and a source image (e.g., a portrait or a drawing), and it generates a new video where the subject of your image performs the actions from the driving video. This is ideal for artists, content creators, or anyone looking to create dynamic visual content from still images.
15,007 stars. No commits in the last 6 months.
Use this if you want to animate a still image using motion captured in a separate video, such as making a portrait appear to speak or a cartoon character to dance.
Not ideal if you need to generate entirely new, photorealistic video content without a reference driving video, or if you require fine-grained control over every aspect of the animation.
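In practice, animation is driven through the repository's `demo.py` script. The sketch below assembles that command line in Python; the flag names follow the repository's README, while the file paths (`source.png`, `driving.mp4`, `vox-cpk.pth.tar`, `result.mp4`) are placeholders you would replace with your own files and a downloaded checkpoint.

```python
# Sketch of invoking the repository's demo script. Flag names come from the
# repo's README; all paths below are illustrative placeholders.
import shlex

def build_demo_command(source_image: str, driving_video: str,
                       checkpoint: str, result_video: str) -> list[str]:
    """Assemble the demo.py command line used for image animation."""
    return [
        "python", "demo.py",
        "--config", "config/vox-256.yaml",  # model config (faces, 256x256)
        "--source_image", source_image,     # still image to animate
        "--driving_video", driving_video,   # video providing the motion
        "--checkpoint", checkpoint,         # pretrained weights
        "--result_video", result_video,     # where to write the output
        "--relative",                       # transfer relative keypoint motion
        "--adapt_scale",                    # adapt motion scale to the source
    ]

cmd = build_demo_command("source.png", "driving.mp4",
                         "vox-cpk.pth.tar", "result.mp4")
print(shlex.join(cmd))
# To actually run it from a checkout of the repository:
# subprocess.run(cmd, check=True)
```

`--relative` transfers motion relative to the driving video's first frame rather than copying absolute keypoint positions, which usually preserves the source image's proportions better.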
Stars: 15,007
Forks: 3,278
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Nov 14, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/AliaksandrSiarohin/first-order-model"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
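The same endpoint can be queried from Python with the standard library alone. A minimal sketch, assuming the endpoint returns a JSON object and that the free tier needs no authentication header; the field layout of the response is not specified here, so the fetch helper returns the decoded JSON as-is.

```python
# Minimal sketch of calling the catalog API (standard library only).
# Assumption: the endpoint returns JSON and needs no auth for the free tier.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("AliaksandrSiarohin", "first-order-model"))
# record = fetch_quality("AliaksandrSiarohin", "first-order-model")  # network call
```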
Related models
kenziyuliu/MS-G3D
[CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for...
yoyo-nb/Thin-Plate-Spline-Motion-Model
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
sergeytulyakov/mocogan
MoCoGAN: Decomposing Motion and Content for Video Generation
DK-Jang/motion_puzzle
Motion Puzzle - Official PyTorch implementation
paulstarke/PhaseBetweener
Creating animation sequences between sparse key frames using motion phase features.