wangqiang9/Awesome-Controllable-Video-Diffusion

Awesome Controllable Video Generation with Diffusion Models

Quality score: 33 / 100 (Emerging)

This resource helps animators, content creators, and researchers generate videos in which specific elements, such as human pose, facial expressions, or camera movement, are precisely controlled. You can input an image or a description of a character along with control signals, such as a desired pose sequence or an audio track, and it outputs a high-quality video animation. It's ideal for those who need fine-grained control over how subjects move or react in generated video content.

No commits in the last 6 months.

Use this if you need to create animated videos with highly specific control over characters' actions, expressions, or the camera's perspective.

Not ideal if you are looking for a simple text-to-video tool without needing detailed control over the animation's underlying mechanics.

Tags: video-animation, character-generation, motion-control, digital-media, virtual-production
Status: Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 7 / 25


Stars: 60
Forks: 3
Language: —
License: MIT
Last pushed: Jul 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/wangqiang9/Awesome-Controllable-Video-Diffusion"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
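The endpoint above appears to follow a `<base>/quality/<topic>/<owner>/<repo>` path pattern, though that is inferred from this single example rather than documented. A minimal Python helper, under that assumption, builds the request URL (the response schema is not documented here, so the sketch stops at the URL):

```python
from urllib.parse import quote

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(topic: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    The topic/owner/repo path layout is an assumption inferred from
    the one example shown on this page, not a documented API contract.
    """
    return f"{API_BASE}/{quote(topic)}/{quote(owner)}/{quote(repo)}"

url = quality_url("diffusion", "wangqiang9",
                  "Awesome-Controllable-Video-Diffusion")
print(url)
```

You can pass the resulting URL to `curl` or any HTTP client; no key is needed within the free 100-requests/day tier.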