Orlando-CS/Awesome-VLA

✨✨ Latest advancements in VLA (Vision-Language-Action) models

Quality score: 33 / 100 (Emerging)

This collection provides an overview of the latest research and advancements in Vision-Language-Action (VLA) models. It helps researchers and engineers quickly find information on cutting-edge models, relevant papers, and available datasets related to training robots and embodied AI to understand and act based on visual and linguistic input. The main users are AI researchers, robotics engineers, and deep learning practitioners focused on developing autonomous systems.


Use this if you are a researcher or engineer looking for a curated list of the newest VLA models, research papers, and datasets to inform your work in embodied AI and robotics.

Not ideal if you are looking for ready-to-use software, code libraries, or tutorials for implementing VLA models without prior research knowledge.

Topics: Robotics Research · Embodied AI · Machine Learning Engineering · Autonomous Systems Development · AI Model Research

No license · No package · No dependents

Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 7 / 25
Community: 7 / 25

How are scores calculated?
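Judging by the numbers shown, the overall score appears to be the sum of the four subscores, each out of 25. A minimal sketch of that arithmetic (the summing methodology is an assumption inferred from the displayed values, not documented by the site):

```python
# Assumed methodology: overall score = sum of four subscores, each out of 25.
subscores = {"Maintenance": 10, "Adoption": 9, "Maturity": 7, "Community": 7}

total = sum(subscores.values())  # out of 4 * 25 = 100
print(total)  # 33, matching the "33 / 100" score above
```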

Stars: 109
Forks: 4
Language: —
License: none
Last pushed: Feb 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Orlando-CS/Awesome-VLA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
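The same endpoint can also be called programmatically. A minimal Python sketch using only the standard library (the structure of the JSON response is not shown on this page, so no fields are assumed beyond the raw decoded object):

```python
# Sketch: querying the pt-edge quality API for a GitHub repository.
# Only the URL shape is taken from the curl example above; the JSON
# response layout is not documented here, so it is returned as-is.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("Orlando-CS", "Awesome-VLA"))
```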