awesome-vla-for-ad and awesome-spatial-intelligence

                   awesome-vla-for-ad          awesome-spatial-intelligence
Maintenance        10/25                       10/25
Adoption           10/25                       10/25
Maturity           15/25                       15/25
Community          15/25                       12/25
Stars              331                         142
Forks              31                          12
Downloads
Commits (30d)      0                           0
Language           HTML                        HTML
License            MIT                         MIT
Package            No Package, No Dependents   No Package, No Dependents

About awesome-vla-for-ad

worldbench/awesome-vla-for-ad

🌐 Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future

This project offers a comprehensive survey of Vision-Language-Action (VLA) models for autonomous driving. It explains how these models integrate real-world visual data and natural language commands to produce driving actions, moving beyond traditional, error-prone modular systems. Robotics engineers and researchers in autonomous vehicle development would use this to understand the current state and future directions of AI-driven self-driving systems.

autonomous-driving robotics self-driving-cars vision-systems vehicle-intelligence
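The end-to-end pattern the survey covers can be sketched in a few lines: a single policy maps a visual observation plus a language command directly to a driving action, replacing the classic perception → prediction → planning module chain. Everything below (class names, the feature semantics, the control values) is an illustrative assumption, not taken from any surveyed model.

```python
from dataclasses import dataclass

@dataclass
class DrivingAction:
    steering: float      # radians; negative steers left
    acceleration: float  # m/s^2; negative means braking

def vla_policy(image_features: list[float], command: str) -> DrivingAction:
    """Toy VLA-style policy: one function fuses vision and language
    into an action, instead of chaining separate modules."""
    # A real model would ground the command with a language encoder;
    # here we only key on an explicit stop instruction.
    if "stop" in command.lower():
        return DrivingAction(steering=0.0, acceleration=-3.0)
    # Assume (purely for this sketch) that the first visual feature
    # encodes lateral offset from the lane centre.
    lateral_offset = image_features[0]
    return DrivingAction(steering=-0.1 * lateral_offset, acceleration=1.0)

action = vla_policy([0.5, 0.1], "continue along the lane")
```

The point of the sketch is the interface, not the logic: vision and language enter together, and a low-level action comes out, which is what distinguishes VLA systems from hand-wired modular stacks.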

About awesome-spatial-intelligence

worldbench/awesome-spatial-intelligence

🌐 Forging Spatial Intelligence: A Roadmap of Multi-Modal Data Pre-Training for Autonomous Systems

This resource curates essential references for researchers and engineers working on autonomous systems. It provides a structured overview of methods for teaching self-driving vehicles, robots, and drones to understand their surroundings using sensors such as cameras and LiDAR. By organizing the research into clear categories, it helps practitioners compare approaches to processing sensor data for better perception and decision-making.

autonomous-driving robotics sensor-fusion perception-systems multi-modal-learning
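One recurring ingredient in the methods this roadmap organizes is feature-level fusion of camera and LiDAR signals. A minimal sketch of that idea, with all dimensions and function names assumed for illustration only:

```python
import numpy as np

def fuse_features(camera_feat: np.ndarray, lidar_feat: np.ndarray) -> np.ndarray:
    """Early (feature-level) fusion sketch: bring both modalities to a
    common width by zero-padding, then concatenate into one vector
    that a downstream perception head could consume."""
    target = max(camera_feat.shape[-1], lidar_feat.shape[-1])
    def pad(x: np.ndarray) -> np.ndarray:
        # Pad the feature vector at the end with zeros up to `target`.
        return np.pad(x, (0, target - x.shape[-1]))
    return np.concatenate([pad(camera_feat), pad(lidar_feat)])

# Assumed sizes: a 256-dim camera embedding and a 128-dim LiDAR embedding.
fused = fuse_features(np.ones(256), np.ones(128))
```

Real systems differ mainly in where this fusion happens (raw data, features, or detections) and in how the shared representation is pre-trained, which is exactly the axis the list's categories follow.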

Scores updated daily from GitHub, PyPI, and npm data.