ThomasVonWu/Awesome-VLMs-Strawberry
A collection of VLMs papers, blogs, and projects, with a focus on VLMs in Autonomous Driving and related reasoning techniques.
This collection helps autonomous-driving researchers and engineers stay current with advances in Vision-Language Models (VLMs). It compiles academic papers, blog posts, and project repositories on how VLMs can improve self-driving perception and decision-making, making it useful for understanding both emerging trends and foundational concepts.
No commits in the last 6 months.
Use this if you are an autonomous driving researcher or engineer looking for a curated list of resources on Vision-Language Models applied to self-driving technology.
Not ideal if you are looking for an introduction to VLMs in general or VLMs applied to domains other than autonomous driving.
Stars: 11
Forks: 2
Language: —
License: —
Category:
Last pushed: Nov 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ThomasVonWu/Awesome-VLMs-Strawberry"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
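The same endpoint can be called from code. Below is a minimal Python sketch, assuming only the URL shown above; the structure of the JSON response is not documented on this page, so the sketch returns the parsed body as-is rather than assuming any particular fields. The helper names (`quality_url`, `fetch_quality`) are illustrative, not part of the API.

```python
# Sketch: query the pt-edge quality API for a GitHub repository.
# Only the endpoint URL is taken from this page; the response schema
# is undocumented here, so we return the parsed JSON without assumptions.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL (hypothetical helper)."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse the quality data for one repository.

    Anonymous calls are limited to 100 requests/day; a free key
    raises that to 1,000/day per the page above.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example (performs a network request):
#   data = fetch_quality("ThomasVonWu", "Awesome-VLMs-Strawberry")
```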
Higher-rated alternatives
chrisliu298/awesome-llm-unlearning
A resource repository for machine unlearning in large language models
worldbench/awesome-vla-for-ad
🌐 Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future
hijkzzz/Awesome-LLM-Strawberry
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques.
zjukg/KG-MM-Survey
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
worldbench/awesome-spatial-intelligence
🌐 Forging Spatial Intelligence: A Roadmap of Multi-Modal Data Pre-Training for Autonomous Systems