awesome-vla-for-ad and Awesome-Large-Vision-Language-Model
About awesome-vla-for-ad
worldbench/awesome-vla-for-ad
🌐 Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future
This project offers a comprehensive survey of Vision-Language-Action (VLA) models for autonomous driving. It explains how these models integrate real-world visual data with natural language commands to produce driving actions, moving beyond traditional, error-prone modular pipelines. Robotics engineers and researchers in autonomous vehicle development can use it to understand the current state and future directions of AI-driven self-driving systems.
About Awesome-Large-Vision-Language-Model
SuperBruceJia/Awesome-Large-Vision-Language-Model
Awesome Large Vision-Language Model: A Curated List of Large Vision-Language Models
This resource provides a curated collection of materials for anyone exploring or working with large vision-language models, including medical foundation models. It centralizes key papers, presentations, books, and benchmarks related to integrating visual and linguistic data. Researchers and AI practitioners focused on developing or applying advanced AI systems that understand and process both images and text will find this helpful.