eddyhkchiu/V2V-LLM
[ICRA2026] Official code of the paper "V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multimodal Large Language Models"
This project offers a method for autonomous vehicles to cooperatively share information using large language models. It takes in perception data from multiple connected vehicles and uses an LLM to answer driving-related questions, identify notable objects, and assist with planning. It is aimed at autonomous driving researchers and engineers developing next-generation cooperative driving systems.
Use this if you are researching or developing advanced cooperative autonomous driving systems and want to explore the integration of multimodal large language models for enhanced perception and decision-making.
Not ideal if you are looking for a complete, production-ready autonomous driving stack for immediate deployment, as this focuses on a research problem setting.
Stars
11
Forks
—
Language
—
License
—
Category
Last pushed
Mar 03, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eddyhkchiu/V2V-LLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.