snap-research/articulated-animation
Code for the paper "Motion Representations for Articulated Animation"
This project lets animators, content creators, and visual-effects artists bring still images to life by transferring motion from a video onto a static image. You provide a source image of a person or character and a "driving" video showing the desired motion (such as someone talking or dancing). The output is a new video in which the still image performs the movements from the driving video.
1,276 stars. No commits in the last 6 months.
Use this if you want to animate a static image of a person or character using motion captured from a separate video.
Not ideal if you need to generate entirely new, custom motions without a source video, or if you are animating objects other than articulated figures.
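If the repository follows the usual layout of Snap Research's motion-transfer projects, an animation run is a single script call taking a config, a checkpoint, a source image, and a driving video. The flag names, config file, and checkpoint path below are assumptions based on that family of repos, not verified against this one; check the repository README for the exact invocation.

```
# Hypothetical demo run; config/checkpoint names are assumptions.
python demo.py \
  --config config/ted384.yaml \
  --checkpoint checkpoints/ted384.pth \
  --source_image source.png \
  --driving_video driving.mp4
```

The source image should be cropped similarly to the frames of the driving video for the motion transfer to look natural.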
Stars
1,276
Forks
352
Language
Jupyter Notebook
License
—
Category
Last pushed
Jun 01, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/snap-research/articulated-animation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
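The same request can be made from Python using only the standard library. The helper names below are illustrative, and the response schema is not documented here, so the payload is decoded as generic JSON rather than mapped to named fields.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality data."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (subject to the 100 requests/day
    keyless limit mentioned above)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("ml-frameworks", "snap-research", "articulated-animation"))
```

With a free API key (1,000 requests/day), the key would be sent as a request header; the exact header name is not stated here, so consult the API's documentation before adding it.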
Related frameworks
pypose/pypose
A library for differentiable robotics on manifolds.
MarcoForte/FBA_Matting
Official repository for the paper F, B, Alpha Matting
foamliu/Deep-Image-Matting
Deep Image Matting
dyelax/Adversarial_Video_Generation
A TensorFlow Implementation of "Deep Multi-Scale Video Prediction Beyond Mean Square Error" by...
DeepMotionEditing/deep-motion-editing
An end-to-end library for editing and rendering motion of 3D characters with deep learning...