worldbench/awesome-vla-for-ad
🌐 Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future
This project offers a comprehensive survey of Vision-Language-Action (VLA) models for autonomous driving. It explains how these models integrate real-world visual data and natural language commands to produce driving actions, moving beyond traditional, error-prone modular systems. Robotics engineers and researchers in autonomous vehicle development would use this to understand the current state and future directions of AI-driven self-driving systems.
Use this if you are exploring how to build more robust and intelligent autonomous driving systems that can understand complex scenarios and natural language instructions.
Not ideal if you are looking for an off-the-shelf software library for immediate implementation, as this is a research survey.
Stars: 331
Forks: 31
Language: HTML
License: MIT
Last pushed: Mar 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/worldbench/awesome-vla-for-ad"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
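For programmatic use, here is a minimal Python sketch equivalent to the curl call above. The endpoint URL is taken directly from that command; the shape of the returned JSON is not documented here, so the example simply fetches and prints the full payload.

# Minimal sketch using the requests library; the response schema is an
# assumption, so we print the whole payload rather than specific fields.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/worldbench/awesome-vla-for-ad"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raise an error on non-2xx responses
print(resp.json())       # inspect the returned fields for this repo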
Related tools
chrisliu298/awesome-llm-unlearning
A resource repository for machine unlearning in large language models
hijkzzz/Awesome-LLM-Strawberry
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques.
zjukg/KG-MM-Survey
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
worldbench/awesome-spatial-intelligence
🌐 Forging Spatial Intelligence: A Roadmap of Multi-Modal Data Pre-Training for Autonomous Systems
worldbench/DriveBench
[ICCV 2025] Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability,...