CIntellifusion/GeometryForcing
[ICLR26] Official implementation of Geometry Forcing: Marrying Video Diffusion and 3D Representation for Consistent World Modeling
This project helps computer-vision researchers generate videos that are consistent in both motion and 3D structure. Given a single input image, it synthesizes a video in which objects and scenes keep realistic, stable shape and position across frames. It suits work on 3D scene reconstruction or video synthesis.
Use this if you need to generate high-quality, temporally consistent videos from a single image while ensuring accurate 3D geometry.
Not ideal if your primary goal is simple video generation without strong emphasis on 3D geometric consistency.
Stars: 160
Forks: 4
Language: Python
License: —
Category:
Last pushed: Jan 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/CIntellifusion/GeometryForcing"
The endpoint is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
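If you prefer to call the endpoint from Python, the curl command above can be wrapped in a small helper. The response schema and any authentication header for keyed access are not documented here, so this sketch only builds the request URL; the "diffusion" category slug is taken from the example, and other category slugs are an assumption.

```python
import urllib.parse

# Base path shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for an owner/repo pair.

    Each path segment is URL-encoded defensively; the set of
    valid category slugs beyond "diffusion" is an assumption.
    """
    parts = [urllib.parse.quote(p, safe="") for p in (category, owner, repo)]
    return f"{BASE}/{'/'.join(parts)}"

url = quality_url("diffusion", "CIntellifusion", "GeometryForcing")
# Reproduces the URL from the curl example; fetch it with
# urllib.request.urlopen(url) or any HTTP client of your choice.
```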
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators