Awesome-Multimodal-Large-Language-Models and Awesome-Multimodal-LLM-Autonomous-Driving
These two repositories are ecosystem siblings: Awesome-Multimodal-LLM-Autonomous-Driving is a specialized application of the broader field surveyed by Awesome-Multimodal-Large-Language-Models, focusing on multimodal large language models within the autonomous driving domain.
About Awesome-Multimodal-Large-Language-Models
BradyFU/Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles:Latest Advances on Multimodal Large Language Models
This resource helps AI researchers and practitioners stay current with the rapidly evolving field of Multimodal Large Language Models (MLLMs). It provides curated lists of significant research papers, comprehensive surveys, and evaluation benchmarks for MLLMs. Its intended audience is researchers, students, and engineers who work on or study advanced AI models that integrate multiple data types, such as text, images, and audio.
About Awesome-Multimodal-LLM-Autonomous-Driving
IrohXu/Awesome-Multimodal-LLM-Autonomous-Driving
[WACV 2024 Survey Paper] Multimodal Large Language Models for Autonomous Driving
This project offers a comprehensive survey of cutting-edge research applying multimodal large language models to autonomous driving systems. It curates papers and resources showing how these models process information from varied sources, such as road images and spoken commands, to support real-time driving decisions. Researchers and engineers in the autonomous vehicle field can use it to stay updated on the latest developments.