aiiu-lab/MeDM

Official PyTorch implementation of "MeDM: Mediating Image Diffusion Models for Video-to-Video Translation with Temporal Correspondence Guidance" in AAAI 2024.

Score: 29 / 100 (Experimental)

This project helps researchers and creative professionals transform existing video footage into new styles or scenes while maintaining smooth motion. You input a source video and a desired style (like a painting or different environment), and it outputs a new video where the content is adapted to the style but the movement remains consistent. This is ideal for video artists, animators, or researchers experimenting with visual content generation.

No commits in the last 6 months.

Use this if you need to stylize or translate a video into a new visual domain while preserving the original motion and temporal consistency.

Not ideal if you are looking to generate entirely new video content from scratch or perform simple video edits like cutting and cropping.

Tags: video-style-transfer, video-editing, creative-arts, computer-vision-research, animation-production
Status: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 30
Forks: 2
Language: Python
License:
Last pushed: Apr 25, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/aiiu-lab/MeDM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
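The same endpoint can be called from Python. A minimal sketch using only the standard library; the `diffusion` path segment is taken verbatim from the curl example above, and the JSON response schema is not documented here, so the result is simply decoded as-is:

```python
import json
import urllib.request

# Base URL from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(segment: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL.

    `segment` is the path component seen in the example ("diffusion");
    its exact meaning is not documented on this page.
    """
    return f"{API_BASE}/{segment}/{owner}/{repo}"

def fetch_quality(segment: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (schema assumed, not documented)."""
    with urllib.request.urlopen(quality_url(segment, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("diffusion", "aiiu-lab", "MeDM"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/aiiu-lab/MeDM
```

Without an API key this is limited to 100 requests/day, per the note above.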