owadev777/EgoVLA_Release
🎥 Train vision-language-action models using egocentric videos for advanced human-centric interactions and evaluations in AI applications.
Stars: 4
Forks: —
Language: Python
License: —
Category: computer-vision
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/owadev777/EgoVLA_Release"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
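If you are querying the endpoint programmatically, a small helper that builds the per-repo URL keeps the path segments consistent. This is only a sketch: the endpoint path and this repo's owner/name come from the page above, but the response schema is not documented here, so parsing the JSON body is left to the caller.

```python
# Sketch of building the quality-API URL shown above.
# Assumption: the path layout is /api/v1/quality/<category>/<owner>/<repo>,
# as in the curl example; the response format is not documented on this page.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL, percent-encoding each segment."""
    return f"{BASE}/{quote(category, safe='')}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("computer-vision", "owadev777", "EgoVLA_Release")
print(url)
```

Given the 100 requests/day unkeyed limit, it is worth caching responses locally rather than re-fetching the same repo repeatedly.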
Higher-rated alternatives
andyzeng/apc-vision-toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object...
OSU-NLP-Group/UGround
[ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents
Ewenwan/MVision
Robot vision, mobile robots, VS-SLAM, ORB-SLAM2, deep-learning object detection (yolov3), action detection, OpenCV, PCL, machine learning, autonomous driving
leggedrobotics/wild_visual_navigation
Wild Visual Navigation: A system for fast traversability learning via pre-trained models and...
microsoft/event-vae-rl
Visuomotor policies from event-based cameras through representation learning and reinforcement...