Awesome-Video-Diffusion-Models and awesome-diffusion-v2v
These are ecosystem siblings: one is a broad survey aggregating video diffusion model research across many applications, while the other is a specialized collection focused on the video-to-video translation subset of that landscape.
About Awesome-Video-Diffusion-Models
ChenHsing/Awesome-Video-Diffusion-Models
[CSUR] A Survey on Video Diffusion Models
This project is a comprehensive guide to video diffusion models, helping researchers, creative professionals, and content creators follow the latest advances in AI-based video generation and editing. Organized around common video creation and editing tasks, it provides a structured overview of the tools and techniques available for producing the desired video content. The resource is aimed at anyone exploring the cutting edge of AI-driven video.
About awesome-diffusion-v2v
wenhao728/awesome-diffusion-v2v
Awesome diffusion Video-to-Video (V2V). A collection of papers on diffusion model-based video editing, a.k.a. video-to-video (V2V) translation, along with video editing benchmark code.
This is a curated collection of cutting-edge research papers, together with a benchmark, for video editing with advanced AI models. It helps video creators and researchers understand and apply techniques that transform existing footage according to specific instructions: given a source video and an editing goal, the cataloged methods produce a modified video, enabling sophisticated visual changes.