yifanlu0227/ChatSim
[CVPR2024 Highlight] Editable Scene Simulation for Autonomous Driving via LLM-Agent Collaboration
ChatSim helps autonomous driving engineers and researchers create and modify realistic driving scenarios. You input real-world driving footage and a description of changes you want to make, like adding cars or altering weather. The output is a new, simulated video sequence incorporating those edits, allowing you to test how self-driving systems react to diverse and custom situations without needing physical road tests.
419 stars. No commits in the last 6 months.
Use this if you need to rapidly generate and test variations of autonomous driving scenarios in a simulated environment by editing existing video footage with text commands.
Not ideal if you need to simulate entirely new environments from scratch or require absolute physical accuracy beyond visual realism for specific engineering analyses.
Stars
419
Forks
28
Language
Python
License
—
Category
—
Last pushed
Dec 11, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yifanlu0227/ChatSim"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
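The curl command above can be wrapped in a few lines of Python. This is a minimal sketch: the base endpoint is taken from this page, but the response schema is not documented here, so the JSON is returned as an untyped dict rather than parsed into named fields.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (schema not documented here)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# URL for this repository:
# quality_url("yifanlu0227", "ChatSim")
# → "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yifanlu0227/ChatSim"
```

With no key this stays within the 100 requests/day anonymous limit; how a key is passed (header vs. query parameter) is not stated on this page, so it is omitted here.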
Higher-rated alternatives
jingyaogong/minimind-v
🚀 [Large models] Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour! 🌏
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model