leggedrobotics/wild_visual_navigation

Wild Visual Navigation: A system for fast traversability learning via pre-trained models and online self-supervision

Quality score: 42 / 100 (Emerging)

This system helps mobile robots understand and navigate complex, changing terrain quickly after minimal human guidance. After observing a short human-led demonstration, the robot uses its visual input to learn which areas are safe to traverse and which are obstacles. This allows robotics engineers and field operators to rapidly deploy autonomous robots in new, unstructured environments.

268 stars. No commits in the last 6 months.

Use this if you need to rapidly teach a mobile robot how to safely navigate varied, 'wild' terrain by showing it a few minutes of examples.

Not ideal if you are looking for a system that requires no initial human input or for precise, structured indoor navigation.

robotics autonomous-navigation field-robotics terrain-traversability robot-training
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 268
Forks: 24
Language: Python
License: MIT
Last pushed: Jul 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/leggedrobotics/wild_visual_navigation"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
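The endpoint URL above follows a simple pattern, so it can be assembled programmatically for other repositories. A minimal sketch in Python, assuming the path layout `/api/v1/quality/<category>/<owner>/<repo>` inferred from the curl example (only the `computer-vision` category is confirmed by this page; other category names are an assumption):

```python
# Base URL taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository.

    The category/owner/repo path layout is inferred from the
    documented curl example; it is not an official API spec.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

# Reproduces the URL from the curl example above.
url = quality_url("computer-vision", "leggedrobotics", "wild_visual_navigation")
print(url)
```

The URL can then be fetched with any HTTP client (e.g. `curl "$url"`), subject to the rate limits noted above.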