eddyhkchiu/V2V-GoT

[ICRA2026] Official code of the paper "V2V-GoT: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multimodal Large Language Models and Graph-of-Thoughts"

Quality score: 23 / 100 (Experimental)

This project helps automotive engineers and researchers develop cooperative autonomous driving systems. It takes raw perception features from multiple connected autonomous vehicles (CAVs) as input and, using a Graph-of-Thoughts reasoning framework with multimodal large language models, generates suggested future trajectories and answers perception and prediction questions. The intended user is an autonomous vehicle researcher or engineer focused on cooperative driving.

Use this if you are developing or evaluating cooperative autonomous driving systems and need to integrate multimodal perception with advanced reasoning for vehicle-to-vehicle collaboration.

Not ideal if you are working on single-vehicle autonomy without vehicle-to-vehicle communication or if your primary focus is hardware-level control systems rather than high-level decision-making.

Topics: autonomous-driving, vehicle-to-vehicle-communication, cooperative-perception, motion-planning, robotics-research
No license · No package · No dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 7 / 25
Community: 0 / 25
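The four category scores appear to sum to the overall score shown above (10 + 6 + 7 + 0 = 23 out of 4 × 25 = 100). A minimal sketch of that aggregation, with illustrative field names (not the service's actual schema):

```python
# Hypothetical aggregation of the four category scores shown above.
# Field names are illustrative assumptions, not the service's schema.
CATEGORY_MAX = 25  # each category is scored out of 25

scores = {
    "maintenance": 10,
    "adoption": 6,
    "maturity": 7,
    "community": 0,
}

# Overall score out of 100 (4 categories x 25 points each).
overall = sum(scores.values())
print(f"{overall} / {4 * CATEGORY_MAX}")  # -> 23 / 100
```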


Stars: 15
Forks: —
Language: Python
License: —
Last pushed: Mar 03, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/eddyhkchiu/V2V-GoT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
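The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library; the helper names are ours, and the response schema is not documented in this card, so inspect the live JSON before relying on specific fields:

```python
# Sketch of calling the quality API from Python instead of curl.
# quality_url/fetch_quality are illustrative helper names, not part
# of any published client library.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report for a repository.

    Anonymous access is rate-limited to 100 requests/day.
    """
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

url = quality_url("eddyhkchiu", "V2V-GoT")
```

Calling `fetch_quality("eddyhkchiu", "V2V-GoT")` performs the live request. With a free API key (1,000 requests/day) you would presumably attach it as a header or query parameter, but this card does not document the exact mechanism.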