NVIDIA-ISAAC-ROS/isaac_ros_freespace_segmentation

NVIDIA-accelerated, deep-learned freespace segmentation

Score: 40 / 100 (Emerging)

This project helps mobile robots identify open areas for navigation and avoid obstacles. It takes real-time left and right stereo camera images, together with the robot's pose relative to the ground plane, and outputs an occupancy grid indicating where the robot can safely move, enabling robust, vision-based obstacle avoidance. It is aimed at robotics engineers and autonomous-system developers.

Use this if you need a real-time, vision-based system to detect walkable areas and obstacles for ground-based mobile robots, especially as a complement to existing lidar systems.

Not ideal if your robot does not have stereo cameras or if you primarily rely on non-visual sensors for obstacle detection.

mobile-robotics autonomous-navigation obstacle-avoidance robot-perception occupancy-mapping
No package published · No dependents
Maintenance 6 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 10 / 25


Stars: 43
Forks: 4
Language: C++
License: Apache-2.0
Last pushed: Dec 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/NVIDIA-ISAAC-ROS/isaac_ros_freespace_segmentation"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.