wangqiang9/Awesome-Controllable-Video-Diffusion
Awesome Controllable Video Generation with Diffusion Models
This resource helps animators, content creators, and researchers generate videos in which specific elements, such as human pose, facial expressions, or camera movement, are precisely controlled. You provide an image or a description of a character along with control signals, such as a desired pose sequence or an audio track, and the collected models output a high-quality video animation. It's ideal for anyone who needs fine-grained control over how subjects move or react in generated video content.
No commits in the last 6 months.
Use this if you need to create animated videos with highly specific control over characters' actions, expressions, or the camera's perspective.
Not ideal if you are looking for a simple text-to-video tool without needing detailed control over the animation's underlying mechanics.
Stars: 60
Forks: 3
Language: —
License: MIT
Category:
Last pushed: Jul 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/wangqiang9/Awesome-Controllable-Video-Diffusion"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
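The endpoint above returns repository metadata as JSON. Below is a minimal Python sketch of consuming such a response, using a hard-coded sample payload; the field names (`repo`, `stars`, `forks`, etc.) are assumptions for illustration, not the documented schema of this API.

```python
import json

# Hypothetical sample of the JSON this endpoint might return.
# Field names are assumptions, not the documented schema.
sample_response = """
{
  "repo": "wangqiang9/Awesome-Controllable-Video-Diffusion",
  "stars": 60,
  "forks": 3,
  "license": "MIT",
  "last_pushed": "2025-07-22",
  "commits_30d": 0
}
"""

data = json.loads(sample_response)

# Pull out the fields a dashboard or script would typically care about.
summary = f"{data['repo']}: {data['stars']} stars, last pushed {data['last_pushed']}"
print(summary)
```

In a real script, you would replace `sample_response` with the body of an HTTP GET against the URL shown above (e.g. via `urllib.request` or `requests`).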
Higher-rated alternatives
lixinustc/Awesome-diffusion-model-for-image-processing
one summary of diffusion-based image processing, including restoration, enhancement, coding,...
showlab/Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, and various other applications.
xlite-dev/Awesome-DiT-Inference
📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization,...
wangkai930418/awesome-diffusion-categorized
collection of diffusion model papers categorized by their subareas
ChenHsing/Awesome-Video-Diffusion-Models
[CSUR] A Survey on Video Diffusion Models