aiiu-lab/MeDM
Official PyTorch implementation of "MeDM: Mediating Image Diffusion Models for Video-to-Video Translation with Temporal Correspondence Guidance" in AAAI 2024.
This project helps researchers and creative professionals transform existing video footage into new styles or scenes while maintaining smooth motion. You input a source video and a desired style (like a painting or different environment), and it outputs a new video where the content is adapted to the style but the movement remains consistent. This is ideal for video artists, animators, or researchers experimenting with visual content generation.
No commits in the last 6 months.
Use this if you need to stylize or translate a video into a new visual domain while preserving the original motion and temporal consistency.
Not ideal if you are looking to generate entirely new video content from scratch or perform simple video edits like cutting and cropping.
Stars: 30
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Apr 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/aiiu-lab/MeDM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
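The curl command above can be wrapped in a small Python client. This is a minimal sketch: the URL path segments (category, owner, repo) follow the example endpoint, but the response schema and the header used for an API key are assumptions, not documented behavior.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def repo_quality_url(category: str, owner: str, name: str) -> str:
    """Build the endpoint URL for a repository, e.g. diffusion/aiiu-lab/MeDM."""
    return f"{API_BASE}/{category}/{owner}/{name}"

def fetch_repo_quality(category: str, owner: str, name: str, api_key=None) -> dict:
    """Fetch the JSON payload for one repository.

    Passing api_key targets the higher (1,000/day) rate limit; the
    Authorization header scheme here is an assumption.
    """
    req = urllib.request.Request(repo_quality_url(category, owner, name))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed scheme
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Reproduces the documented example endpoint:
url = repo_quality_url("diffusion", "aiiu-lab", "MeDM")
```

How to interpret the returned JSON depends on the API's (unspecified) response schema, so the sketch returns the raw dict unchanged.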
Higher-rated alternatives
- hao-ai-lab/FastVideo: A unified inference and post-training framework for accelerated video generation.
- ModelTC/LightX2V: Light image/video generation inference framework.
- thu-ml/TurboDiffusion: 100–200× acceleration for video diffusion models.
- PKU-YuanGroup/Helios: Real-time long video generation model.
- PKU-YuanGroup/MagicTime: [TPAMI 2025🔥] Time-lapse video generation models as metamorphic simulators.