Orlando-CS/Awesome-VLA
✨✨ Latest advancements in VLA (Vision-Language-Action) models
This collection provides an overview of the latest research and advancements in Vision-Language-Action (VLA) models. It helps researchers and engineers quickly find information on cutting-edge models, relevant papers, and available datasets related to training robots and embodied AI to understand and act based on visual and linguistic input. The main users are AI researchers, robotics engineers, and deep learning practitioners focused on developing autonomous systems.
Use this if you are a researcher or engineer looking for a curated list of the newest VLA models, research papers, and datasets to inform your work in embodied AI and robotics.
Not ideal if you are looking for ready-to-use software, code libraries, or tutorials for implementing VLA models without prior research knowledge.
Stars: 109
Forks: 4
Language: —
License: —
Category: —
Last pushed: Feb 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Orlando-CS/Awesome-VLA"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
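The curl call above can also be made from Python. This is a minimal sketch using only the standard library; the response schema is not documented here, so it is assumed to be JSON, and the mechanism for passing an API key (once you have one) is likewise unspecified.

```python
# Sketch of calling the quality API shown above (assumption: JSON response).
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the endpoint URL for a repo, e.g. 'Orlando-CS/Awesome-VLA'."""
    # Percent-encode path segments; '/' inside the repo slug stays literal.
    return f"{BASE}/{quote(registry)}/{quote(repo, safe='/')}"

def fetch_quality(registry: str, repo: str) -> dict:
    """Fetch and decode the quality record (network call)."""
    with urllib.request.urlopen(quality_url(registry, repo)) as resp:
        return json.load(resp)
```

For example, `quality_url("transformers", "Orlando-CS/Awesome-VLA")` reproduces the URL used in the curl command, and `fetch_quality` performs the actual request within the no-key rate limit.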
Higher-rated alternatives
BradyFU/Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles:Latest Advances on Multimodal Large Language Models
FoundationVision/Liquid
(Accepted by IJCV) Liquid: Language Models are Scalable and Unified Multi-modal Generators
Paranioar/Awesome_Matching_Pretraining_Transfering
The Paper List of Large Multi-Modality Model (Perception, Generation, Unification),...
Yangyi-Chen/Multimodal-AND-Large-Language-Models
Paper list about multimodal and large language models, only used to record papers I read in the...
thuml/AutoTimes
Official implementation for "AutoTimes: Autoregressive Time Series Forecasters via Large Language Models"