eddyhkchiu/V2V-GoT
[ICRA2026] Official code of the paper "V2V-GoT: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multimodal Large Language Models and Graph-of-Thoughts"
This project helps automotive engineers and researchers develop advanced cooperative autonomous driving systems. It takes raw perception features from multiple connected autonomous vehicles (CAVs) as input and, using a Graph-of-Thoughts reasoning framework built on multimodal large language models, generates suggested future trajectories and answers perception and prediction questions. It targets autonomous vehicle researchers and engineers focused on cooperative driving.
Use this if you are developing or evaluating cooperative autonomous driving systems and need to integrate multimodal perception with advanced reasoning for vehicle-to-vehicle collaboration.
Not ideal if you are working on single-vehicle autonomy without vehicle-to-vehicle communication or if your primary focus is hardware-level control systems rather than high-level decision-making.
Stars: 15
Forks: —
Language: Python
License: —
Category: —
Last pushed: Mar 03, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/eddyhkchiu/V2V-GoT"
Open to everyone: 100 requests/day with no key required. Get a free key to raise the limit to 1,000/day.
Higher-rated alternatives
StanfordASL/Trajectron
Code accompanying "The Trajectron: Probabilistic Multi-Agent Trajectory Modeling with Dynamic...
StanfordASL/Trajectron-plus-plus
Code accompanying the ECCV 2020 paper "Trajectron++: Dynamically-Feasible Trajectory Forecasting...
uber-research/LaneGCN
[ECCV2020 Oral] Learning Lane Graph Representations for Motion Forecasting
agrimgupta92/sgan
Code for "Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks",...
devendrachaplot/Neural-SLAM
Pytorch code for ICLR-20 Paper "Learning to Explore using Active Neural SLAM"