leggedrobotics/wild_visual_navigation
Wild Visual Navigation: A system for fast traversability learning via pre-trained models and online self-supervision
This system helps mobile robots understand and navigate complex, changing terrain quickly after minimal human guidance. By observing a short human-led demonstration, the robot learns which areas are safe to traverse and which are obstacles, using its visual input. This allows robotics engineers and field operators to rapidly deploy autonomous robots in new, unstructured environments.
268 stars. No commits in the last 6 months.
Use this if you need to rapidly teach a mobile robot how to safely navigate varied, 'wild' terrain by showing it a few minutes of examples.
Not ideal if you need a system that works without any initial human input, or for precise navigation in structured indoor environments.
Stars: 268
Forks: 24
Language: Python
License: MIT
Category:
Last pushed: Jul 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/leggedrobotics/wild_visual_navigation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
andyzeng/apc-vision-toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object...
OSU-NLP-Group/UGround
[ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents
Ewenwan/MVision
Robot vision and mobile robots: VS-SLAM, ORB-SLAM2, deep-learning object detection (yolov3), action detection, opencv, PCL, machine learning, autonomous driving
microsoft/event-vae-rl
Visuomotor policies from event-based cameras through representation learning and reinforcement...
RizwanMunawar/trajectory-forcast
Forecast object trajectory based on history of tracks. Provides a stable and computationally...