kwsong0113/diffusion-forcing-transformer
[ICML 2025] Official PyTorch Implementation of "History-Guided Video Diffusion"
This project generates video from one or more conditioning images, producing both short clips and very long rollouts with improved temporal consistency and realistic motion. You input one or more still frames, and the model extends or interpolates that visual context into a continuous, high-quality video. It suits content creators, marketers, and researchers who need to visualize dynamic processes or bring static imagery to life.
637 stars. No commits in the last 6 months.
Use this if you need to generate high-quality, temporally consistent videos from a few starting images, or require stable, long-duration video rollouts from a single frame.
Not ideal if you're looking for a simple drag-and-drop video editor or a tool for complex video manipulations like adding special effects or editing existing video clips.
Stars: 637
Forks: 32
Language: Python
License: —
Category: —
Last pushed: Jul 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/kwsong0113/diffusion-forcing-transformer"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000/day.
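The same endpoint can be queried programmatically. A minimal sketch, assuming only what the curl example above shows: the base path `/api/v1/quality`, a `diffusion` category segment taken verbatim from that URL, and an unspecified JSON response schema.

```python
# Sketch of querying the quality API shown in the curl example above.
# The "diffusion" path segment and the JSON response shape are assumptions
# based solely on that example; the schema is not documented here.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(owner: str, repo: str, category: str = "diffusion") -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record (anonymous access: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the URL for the repository on this page.
    print(quality_url("kwsong0113", "diffusion-forcing-transformer"))
```

For higher limits, the page notes a free key raises the quota to 1,000 requests/day; how the key is passed (header vs. query parameter) is not documented here.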
Higher-rated alternatives
PRIS-CV/DemoFusion
Let us democratise high-resolution generation! (CVPR 2024)
mit-han-lab/distrifuser
[CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
Tencent-Hunyuan/HunyuanPortrait
[CVPR-2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced...
giuvecchio/matfuse
MatFuse: Controllable Material Generation with Diffusion Models (CVPR2024)
Shilin-LU/TF-ICON
[ICCV 2023] "TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition" (Official...