AliaksandrSiarohin/first-order-model

This repository contains the source code for the paper "First Order Motion Model for Image Animation".

Score: 51 / 100 (Established)

This project lets you bring static images to life by transferring motion from a video onto them. You provide a driving video (e.g., someone speaking or dancing) and a source image (e.g., a portrait or a drawing), and it generates a new video where the subject of your image performs the actions from the driving video. This is ideal for artists, content creators, or anyone looking to create dynamic visual content from still images.

15,007 stars. No commits in the last 6 months.

Use this if you want to animate a still image using motion captured in a separate video, such as making a portrait appear to speak or a cartoon character dance.

Not ideal if you need to generate entirely new, photorealistic video content without a reference driving video, or if you require fine-grained control over every aspect of the animation.
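As a sketch of typical usage: the repository's README drives the animation through its demo.py script. The command below follows that pattern, but the config name, checkpoint, and media paths are placeholders you would substitute with your own files and downloaded pretrained weights.

```shell
# Hypothetical invocation based on the repository's README; all paths are placeholders.
python demo.py \
  --config config/vox-256.yaml \
  --driving_video path/to/driving.mp4 \
  --source_image path/to/source.png \
  --checkpoint path/to/checkpoint.pth.tar \
  --relative --adapt_scale
```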

Tags: video-production, digital-art, content-creation, visual-effects, animation
Status: Stale (6 months) · No package published · No known dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 15,007
Forks: 3,278
Language: Jupyter Notebook
License: MIT
Last pushed: Nov 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/AliaksandrSiarohin/first-order-model"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
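Beyond curl, the same endpoint can be used from a script. This is a minimal sketch in Python: the URL pattern matches the curl example above, but the JSON field names in `SAMPLE` ("maintenance", "adoption", and so on) are assumptions inferred from the score breakdown shown on this page, not a documented response schema.

```python
# Sketch of working with the quality API from Python.
# NOTE: the keys in SAMPLE are assumed field names, not a confirmed schema.
BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def total_score(breakdown: dict) -> int:
    """Sum the four 0-25 sub-scores into the 0-100 total score."""
    return sum(breakdown[k] for k in ("maintenance", "adoption", "maturity", "community"))

# Sub-scores as shown on this page for first-order-model:
SAMPLE = {"maintenance": 0, "adoption": 10, "maturity": 16, "community": 25}

print(quality_url("AliaksandrSiarohin", "first-order-model"))
print(total_score(SAMPLE))  # 51
```

To fetch live data instead of the sample, pass `quality_url(...)` to any HTTP client (for example `urllib.request.urlopen`) and decode the JSON body.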