kwsong0113/diffusion-forcing-transformer

[ICML 2025] Official PyTorch Implementation of "History-Guided Video Diffusion"

Score: 41 / 100 (Emerging)

This project generates videos from existing images, producing both short and very long rollouts with strong temporal consistency and realistic motion. You provide one or more still images, and the model outputs a high-quality, continuous video that extends or interpolates the given visual context. It suits content creators, marketers, and researchers who need to visualize dynamic processes or bring static imagery to life.

637 stars. No commits in the last 6 months.

Use this if you need to generate high-quality, temporally consistent videos from a few starting images, or require stable, long-duration video rollouts from a single frame.

Not ideal if you're looking for a simple drag-and-drop video editor or a tool for complex video manipulations like adding special effects or editing existing video clips.

video-generation content-creation motion-design visual-storytelling image-to-video
Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 637
Forks: 32
Language: Python
License:
Last pushed: Jul 01, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/kwsong0113/diffusion-forcing-transformer"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
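The same endpoint can also be queried programmatically. A minimal Python sketch using only the standard library; the URL comes from the curl example above, but the JSON field names in the fetch helper are assumptions, since the response schema is not documented here:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Construct the quality-score endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality payload as JSON.

    The structure of the returned dict is an assumption; inspect the
    live response to see the actual fields.
    """
    url = build_quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Prints the endpoint for the repository described on this page.
    print(build_quality_url("diffusion", "kwsong0113",
                            "diffusion-forcing-transformer"))
```

Swap in your own key-authenticated request (per the note above) if you need more than 100 requests/day.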