YvanYin/DrivingWorld
Code for "DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT"
This project helps autonomous driving system developers simulate future driving scenarios. It uses existing video footage and vehicle telemetry (the ego state) to predict how the environment and the vehicle's trajectory will evolve over long horizons. An AI research engineer or simulation specialist would use this to test and refine autonomous driving models.
238 stars. No commits in the last 6 months.
Use this if you need to generate realistic, long-duration future driving videos and vehicle states to test and validate autonomous driving systems.
Not ideal if you are looking for real-time control of a physical autonomous vehicle or a tool for general video prediction outside of driving scenarios.
Stars
238
Forks
24
Language
Python
License
MIT
Category
Last pushed
Jan 15, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/YvanYin/DrivingWorld"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
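The same endpoint can be called from Python. The sketch below builds the URL from the curl example above; the helper names are hypothetical and the JSON response schema is an assumption, since it is not documented here.

```python
import json
import urllib.request

# Base path taken from the curl example; "llm-tools" is the collection name.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a GitHub repo (hypothetical helper)."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (requires network access).

    The returned fields (stars, forks, etc.) are an assumption based on
    the stats shown on this page, not a documented schema.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the URL for this repo; call fetch_quality() to hit the API.
    print(quality_url("YvanYin", "DrivingWorld"))
```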
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM ("large model") from scratch in just 1 hour!
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model