SenseTime-FVG/OpenDWM
An open-source repository of driving world models, with training, inference, and evaluation tools, plus pretrained checkpoints.
This project helps automotive engineers and researchers create realistic, multi-view videos of autonomous driving scenarios. You provide text descriptions and road environment layouts, and it generates diverse videos, complete with various weather conditions, vehicle types, and driving paths. It's designed for anyone developing or testing autonomous vehicle systems who needs to simulate complex driving situations without real-world data collection.
379 stars. No commits in the last 6 months.
Use this if you need to generate high-quality, controllable autonomous driving videos from text and layout conditions to test or train your autonomous systems.
Not ideal if you're looking for real-world sensor data or prefer to work exclusively with physical test tracks.
Stars: 379
Forks: 46
Language: Python
License: MIT
Category:
Last pushed: Jun 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/SenseTime-FVG/OpenDWM"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
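The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library; the URL pattern comes from the curl command above, but the shape of the JSON payload is not documented here, so the response is returned as an untyped dict:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository, matching the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Endpoint for this repository:
url = quality_url("diffusion", "SenseTime-FVG", "OpenDWM")
```

Calling `fetch_quality("diffusion", "SenseTime-FVG", "OpenDWM")` performs the live request; inspect the returned dict to see which fields the API exposes.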
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators